Nov 21 13:32:04 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 21 13:32:04 crc restorecon[4759]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:04 crc restorecon[4759]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 
13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc 
restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 21 13:32:05 crc restorecon[4759]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 21 13:32:06 crc kubenswrapper[4904]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 13:32:06 crc kubenswrapper[4904]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 21 13:32:06 crc kubenswrapper[4904]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 13:32:06 crc kubenswrapper[4904]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 21 13:32:06 crc kubenswrapper[4904]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 21 13:32:06 crc kubenswrapper[4904]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.291884 4904 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.301978 4904 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302014 4904 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302018 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302022 4904 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302027 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302031 4904 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302035 4904 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302038 4904 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302043 4904 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302048 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302052 4904 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302056 4904 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302059 4904 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302063 4904 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302067 4904 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302071 4904 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302074 4904 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302077 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302081 4904 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302084 4904 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302088 4904 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302091 4904 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302094 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302098 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302101 4904 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302105 4904 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302108 4904 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302112 4904 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302118 4904 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302122 4904 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302125 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302133 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302136 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302141 4904 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302144 4904 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302150 4904 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302154 4904 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302158 4904 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302161 4904 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302165 4904 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302168 4904 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302171 4904 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302175 4904 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302179 4904 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302182 4904 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302186 4904 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302189 4904 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302193 4904 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302196 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302200 4904 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302203 4904 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302207 4904 feature_gate.go:330] unrecognized feature gate: Example Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302210 4904 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302213 4904 feature_gate.go:330] unrecognized feature 
gate: NodeDisruptionPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302217 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302220 4904 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302223 4904 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302228 4904 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302232 4904 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302236 4904 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302240 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302245 4904 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302249 4904 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302252 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302256 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302259 4904 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302263 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302266 4904 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302270 4904 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302273 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.302278 4904 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302385 4904 flags.go:64] FLAG: --address="0.0.0.0" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302395 4904 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302402 4904 flags.go:64] FLAG: --anonymous-auth="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302408 4904 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302415 4904 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302419 4904 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302426 4904 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302431 4904 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302435 4904 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" 
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302440 4904 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302444 4904 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302449 4904 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302453 4904 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302458 4904 flags.go:64] FLAG: --cgroup-root="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302462 4904 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302466 4904 flags.go:64] FLAG: --client-ca-file="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302471 4904 flags.go:64] FLAG: --cloud-config="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302475 4904 flags.go:64] FLAG: --cloud-provider="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302479 4904 flags.go:64] FLAG: --cluster-dns="[]" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302485 4904 flags.go:64] FLAG: --cluster-domain="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302489 4904 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302495 4904 flags.go:64] FLAG: --config-dir="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302500 4904 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302505 4904 flags.go:64] FLAG: --container-log-max-files="5" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302515 4904 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302519 4904 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302523 4904 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302528 4904 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302533 4904 flags.go:64] FLAG: --contention-profiling="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302537 4904 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302542 4904 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302546 4904 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302550 4904 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302555 4904 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302559 4904 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302563 4904 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302567 4904 flags.go:64] FLAG: --enable-load-reader="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302572 4904 flags.go:64] FLAG: --enable-server="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302576 4904 flags.go:64] FLAG: 
--enforce-node-allocatable="[pods]" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302582 4904 flags.go:64] FLAG: --event-burst="100" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302586 4904 flags.go:64] FLAG: --event-qps="50" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302590 4904 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302595 4904 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302599 4904 flags.go:64] FLAG: --eviction-hard="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302604 4904 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302608 4904 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302612 4904 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302617 4904 flags.go:64] FLAG: --eviction-soft="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302622 4904 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302625 4904 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302629 4904 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302633 4904 flags.go:64] FLAG: --experimental-mounter-path="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302638 4904 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302642 4904 flags.go:64] FLAG: --fail-swap-on="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302646 4904 flags.go:64] FLAG: --feature-gates="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302665 4904 flags.go:64] FLAG: --file-check-frequency="20s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302669 4904 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302674 4904 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302678 4904 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302682 4904 flags.go:64] FLAG: --healthz-port="10248" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302686 4904 flags.go:64] FLAG: --help="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302691 4904 flags.go:64] FLAG: --hostname-override="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302694 4904 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302699 4904 flags.go:64] FLAG: --http-check-frequency="20s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302703 4904 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302707 4904 flags.go:64] FLAG: --image-credential-provider-config="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302711 4904 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302715 4904 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302720 4904 flags.go:64] FLAG: --image-service-endpoint="" Nov 21 
13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302725 4904 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302729 4904 flags.go:64] FLAG: --kube-api-burst="100" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302733 4904 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302738 4904 flags.go:64] FLAG: --kube-api-qps="50" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302742 4904 flags.go:64] FLAG: --kube-reserved="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302746 4904 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302750 4904 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302755 4904 flags.go:64] FLAG: --kubelet-cgroups="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302759 4904 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302763 4904 flags.go:64] FLAG: --lock-file="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302767 4904 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302771 4904 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302776 4904 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302782 4904 flags.go:64] FLAG: --log-json-split-stream="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302787 4904 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302791 4904 flags.go:64] FLAG: --log-text-split-stream="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302795 4904 flags.go:64] FLAG: --logging-format="text" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302799 4904 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302804 4904 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302808 4904 flags.go:64] FLAG: --manifest-url="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302812 4904 flags.go:64] FLAG: --manifest-url-header="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302817 4904 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302822 4904 flags.go:64] FLAG: --max-open-files="1000000" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302827 4904 flags.go:64] FLAG: --max-pods="110" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302831 4904 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302835 4904 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302839 4904 flags.go:64] FLAG: --memory-manager-policy="None" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302844 4904 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302848 4904 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302852 4904 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 21 
13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302857 4904 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302867 4904 flags.go:64] FLAG: --node-status-max-images="50" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302871 4904 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302875 4904 flags.go:64] FLAG: --oom-score-adj="-999" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302880 4904 flags.go:64] FLAG: --pod-cidr="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302884 4904 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302890 4904 flags.go:64] FLAG: --pod-manifest-path="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302894 4904 flags.go:64] FLAG: --pod-max-pids="-1" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302898 4904 flags.go:64] FLAG: --pods-per-core="0" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302902 4904 flags.go:64] FLAG: --port="10250" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302906 4904 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302911 4904 flags.go:64] FLAG: --provider-id="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302915 4904 flags.go:64] FLAG: --qos-reserved="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302919 4904 flags.go:64] FLAG: --read-only-port="10255" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302923 4904 flags.go:64] FLAG: --register-node="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302927 4904 flags.go:64] FLAG: --register-schedulable="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302931 4904 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302938 4904 flags.go:64] FLAG: --registry-burst="10" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302942 4904 flags.go:64] FLAG: --registry-qps="5" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302946 4904 flags.go:64] FLAG: --reserved-cpus="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302950 4904 flags.go:64] FLAG: --reserved-memory="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302956 4904 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302960 4904 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302965 4904 flags.go:64] FLAG: --rotate-certificates="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302969 4904 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302973 4904 flags.go:64] FLAG: --runonce="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302977 4904 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302981 4904 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302985 4904 flags.go:64] FLAG: --seccomp-default="false" Nov 21 13:32:06 crc kubenswrapper[4904]: 
I1121 13:32:06.302989 4904 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302994 4904 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.302998 4904 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303002 4904 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303006 4904 flags.go:64] FLAG: --storage-driver-password="root" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303011 4904 flags.go:64] FLAG: --storage-driver-secure="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303015 4904 flags.go:64] FLAG: --storage-driver-table="stats" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303020 4904 flags.go:64] FLAG: --storage-driver-user="root" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303024 4904 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303028 4904 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303032 4904 flags.go:64] FLAG: --system-cgroups="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303036 4904 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303043 4904 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303047 4904 flags.go:64] FLAG: --tls-cert-file="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303051 4904 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303057 4904 flags.go:64] FLAG: --tls-min-version="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303061 4904 flags.go:64] FLAG: --tls-private-key-file="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303065 4904 flags.go:64] FLAG: --topology-manager-policy="none" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303069 4904 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303073 4904 flags.go:64] FLAG: --topology-manager-scope="container" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303078 4904 flags.go:64] FLAG: --v="2" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303083 4904 flags.go:64] FLAG: --version="false" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303089 4904 flags.go:64] FLAG: --vmodule="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303094 4904 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303098 4904 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303193 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303198 4904 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303202 4904 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303207 4904 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
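Editor's note — the long flags.go:64 run above is the kubelet echoing every effective flag value at startup, one entry per flag, always in the shape --name="value". A small sketch (the line shape is assumed from this excerpt) that collects the dump into a dict:

```python
import re

# Collect the kubelet's FLAG dump (flags.go:64 entries) into a dict.
# The --name="value" shape is assumed from the journal excerpt above.
FLAG_RE = re.compile(r'FLAG: (--[\w.-]+)="(.*?)"')

def parse_flag_dump(journal_text: str) -> dict[str, str]:
    return {name: value for name, value in FLAG_RE.findall(journal_text)}

sample = 'I1121 13:32:06.302682 4904 flags.go:64] FLAG: --healthz-port="10248"'
assert parse_flag_dump(sample) == {"--healthz-port": "10248"}
```

Diffing the dicts produced from two boots is a quick way to spot which of these hundred-odd values actually changed between restarts.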
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303212 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303216 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303219 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303223 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303226 4904 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303230 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303234 4904 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303238 4904 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303246 4904 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303250 4904 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303256 4904 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303260 4904 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303265 4904 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303270 4904 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303275 4904 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303279 4904 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303283 4904 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303287 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303291 4904 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303295 4904 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303298 4904 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303302 4904 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303305 4904 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303309 4904 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303313 4904 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303316 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303320 4904 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303323 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303327 4904 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303330 4904 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303334 4904 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303337 4904 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303341 4904 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303344 4904 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303348 4904 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303352 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303355 4904 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303359 4904 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303362 4904 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303366 4904 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303371 4904 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303375 4904 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303378 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303382 4904 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303385 4904 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303389 4904 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303392 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303396 4904 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303399 4904 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303402 4904 feature_gate.go:330] unrecognized feature gate: Example Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303406 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303409 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303413 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303416 4904 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303420 4904 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303423 4904 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303427 4904 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303430 4904 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303433 4904 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303437 4904 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303440 4904 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303443 4904 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303447 4904 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303451 4904 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303454 4904 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303457 4904 feature_gate.go:330] unrecognized 
feature gate: BareMetalLoadBalancer Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.303461 4904 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.303475 4904 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.311769 4904 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.311811 4904 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311887 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311896 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311900 4904 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311904 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311908 4904 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311912 4904 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311916 4904 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311922 4904 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
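Editor's note — the gate-handling entries repeated throughout this boot come in three flavors, distinguishable by the feature_gate.go call site: line 330 ("unrecognized feature gate"), line 351 ("Setting deprecated feature gate"), and line 353 ("Setting GA feature gate"). A sketch that tallies them when triaging a noisy startup log (patterns assumed from the wording above):

```python
import re

# Tally the three feature-gate warning flavors seen in this journal:
#   feature_gate.go:330  unrecognized feature gate: <Name>
#   feature_gate.go:351  Setting deprecated feature gate <Name>=...
#   feature_gate.go:353  Setting GA feature gate <Name>=...
PATTERNS = {
    "unrecognized": re.compile(r"unrecognized feature gate: (\w+)"),
    "deprecated": re.compile(r"Setting deprecated feature gate (\w+)="),
    "ga": re.compile(r"Setting GA feature gate (\w+)="),
}

def tally(journal_text: str) -> dict[str, set[str]]:
    """Return the distinct gate names seen per warning flavor."""
    return {kind: set(rx.findall(journal_text)) for kind, rx in PATTERNS.items()}
```

The "unrecognized" set here is dominated by OpenShift-specific gates (GatewayAPI, PinnedImages, NewOLM, ...) that the embedded Kubernetes gate registry does not know about, which is why the same block repeats each time a component re-applies the gate list.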
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311927 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311931 4904 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311935 4904 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311939 4904 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311942 4904 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311946 4904 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311950 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311953 4904 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311957 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311960 4904 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311964 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311967 4904 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311971 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311975 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311978 4904 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311982 4904 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311987 4904 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311992 4904 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.311996 4904 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312001 4904 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312005 4904 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312008 4904 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312012 4904 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312016 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312019 4904 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312023 4904 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312027 4904 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312031 4904 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312034 4904 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312038 4904 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312042 4904 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312045 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312051 4904 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312055 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312059 4904 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312064 4904 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312067 4904 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312071 4904 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312075 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312079 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312084 4904 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312088 4904 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312092 4904 feature_gate.go:330] unrecognized feature gate: Example Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312096 4904 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312100 4904 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312105 4904 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312108 4904 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312112 4904 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312115 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312119 4904 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312123 4904 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312126 4904 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312130 4904 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312133 4904 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312137 4904 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312141 4904 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312144 4904 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312148 4904 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312151 4904 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312154 4904 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312158 4904 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312162 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312166 4904 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.312173 4904 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} 
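Editor's note — the feature_gate.go:386 summary lines print the effective gates as a Go map literal, `feature gates: {map[Name:bool ...]}`. A sketch that turns that dump into a Python dict of booleans (format assumed from the summaries above):

```python
import re

# Parse the kubelet's "feature gates: {map[...]}" summary into a dict.
MAP_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_feature_gates(line: str) -> dict[str, bool]:
    m = MAP_RE.search(line)
    if not m:
        return {}
    pairs = (item.split(":") for item in m.group(1).split())
    return {name: value == "true" for name, value in pairs}

line = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
gates = parse_feature_gates(line)
assert gates["KMSv1"] and not gates["NodeSwap"]
```

Note the summary is identical on each of its appearances in this boot; only the preceding warning order differs (Go map iteration order is randomized).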
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312288 4904 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312298 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312302 4904 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312306 4904 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312310 4904 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312314 4904 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312318 4904 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312321 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312325 4904 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312329 4904 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312332 4904 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312336 4904 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312339 4904 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312343 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312346 4904 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312349 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312377 4904 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312386 4904 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312391 4904 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312396 4904 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312400 4904 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312404 4904 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312409 4904 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312413 4904 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312418 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312422 4904 
feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312426 4904 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312431 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312435 4904 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312439 4904 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312443 4904 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312448 4904 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312452 4904 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312457 4904 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312463 4904 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312467 4904 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312470 4904 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312474 4904 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312478 4904 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312481 4904 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312485 4904 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312489 4904 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312493 4904 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312502 4904 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312509 4904 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312515 4904 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312521 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312527 4904 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312532 4904 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312537 4904 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312542 4904 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312547 4904 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312552 4904 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312557 4904 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312563 4904 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312568 4904 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312573 4904 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312579 4904 feature_gate.go:330] unrecognized feature gate: Example
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312583 4904 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312587 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312592 4904 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312597 4904 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312601 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312605 4904 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312610 4904 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312614 4904 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312618 4904 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312622 4904 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312626 4904 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312630 4904 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.312636 4904 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.312643 4904 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.312866 4904 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.316802 4904 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.316882 4904 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.318265 4904 server.go:997] "Starting client certificate rotation"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.318286 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.318478 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-23 00:46:03.820060965 +0000 UTC
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.318566 4904 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 35h13m57.501499105s for next certificate rotation
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.347373 4904 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.353536 4904 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.370029 4904 log.go:25] "Validated CRI v1 runtime API"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.400875 4904 log.go:25] "Validated CRI v1 image API"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.402580 4904 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.406870 4904 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-21-13-27-23-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.406903 4904 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.433608 4904 manager.go:217] Machine: {Timestamp:2025-11-21 13:32:06.429712047 +0000 UTC m=+0.551244679 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e1db4033-eba5-4a2b-9bc8-5ae38770be76 BootID:a779a9ca-4efd-4ba5-b2c5-671da2b6633b Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d1:ad:68 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d1:ad:68 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4b:69:bb Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:10:5f:48 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:83:34:6f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9d:be:35 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:51:58:15 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:02:e0:50:3e:eb:37 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:22:71:66:0b:74:1c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.434081 4904 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.434298 4904 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.435868 4904 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.436354 4904 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.436409 4904 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.436747 4904 topology_manager.go:138] "Creating topology manager with none policy"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.436764 4904 container_manager_linux.go:303] "Creating device plugin manager"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.437350 4904 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.437403 4904 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.437641 4904 state_mem.go:36] "Initialized new in-memory state store"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.438208 4904 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.443340 4904 kubelet.go:418] "Attempting to sync node with API server"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.443369 4904 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.443392 4904 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.443412 4904 kubelet.go:324] "Adding apiserver pod source"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.443433 4904 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.447799 4904 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.448736 4904 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.450588 4904 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.451050 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.451078 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.451154 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.451234 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.451991 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452016 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452023 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452035 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452046 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452053 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452060 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452069 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452077 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452084 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452106 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.452113 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.453294 4904 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.453714 4904 server.go:1280] "Started kubelet"
Nov 21 13:32:06 crc systemd[1]: Started Kubernetes Kubelet.
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.455018 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.454789 4904 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.454826 4904 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.456010 4904 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.456853 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.456901 4904 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.456935 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:56:35.380542301 +0000 UTC
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.456997 4904 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 976h24m28.923548628s for next certificate rotation
Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.457171 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.457223 4904 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.457238 4904 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.457316 4904 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.457486 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="200ms"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.466337 4904 factory.go:153] Registering CRI-O factory
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.465059 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.466506 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.467108 4904 factory.go:221] Registration of the crio container factory successfully
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.468535 4904 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.468570 4904 factory.go:55] Registering systemd factory
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.468616 4904 factory.go:221] Registration of the systemd container factory successfully
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.468690 4904 factory.go:103] Registering Raw factory
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.468718 4904 manager.go:1196] Started watching for new ooms in manager
Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.468184 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.138:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a08d7ab4984c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-21 13:32:06.453683392 +0000 UTC m=+0.575215944,LastTimestamp:2025-11-21 13:32:06.453683392 +0000 UTC m=+0.575215944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.469518 4904 server.go:460] "Adding debug handlers to kubelet server"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.470313 4904 manager.go:319] Starting recovery of all containers
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473437 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473493 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473511 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473526 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473539 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473551 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473564 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473578 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473593 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473606 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473619 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473632 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473645 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473707 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473720 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473732 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473743 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473755 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473767 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473780 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473793 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473805 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473837 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473851 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473866 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473877 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473920 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.473979 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474020 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474033 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474047 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474059 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474072 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474084 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474099 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474112 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474125 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474139 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474152 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474165 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474179 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474193 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474205 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474218 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474230 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474243 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474255 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474270 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474303 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474327 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474346 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474365 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474392 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474408 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474423 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474441 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474456 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474470 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474484 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474496 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474508 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.474523 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476561 4904 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476591 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476607 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476620 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476634 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476647 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476725 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476738 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476751 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476765 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476778 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476792 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476806 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476820 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476831 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476851 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476892 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.476978 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477002 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477019 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477037 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477066 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477080 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477092 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477106 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477121 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477135 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477149 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477165 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477182 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477215 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477228 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477241 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477257 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477275 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477293 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477308 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477323 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477337 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477349 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477362 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477374 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477388 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477408 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477422 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477437 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477450 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477465 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477482 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477512 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477536 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477554 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477572 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477590 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477608 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477626 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477646 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477696 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477712 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477730 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477748 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477762 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477776 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477790 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477803 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477816 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477829 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477842 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477854 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477866 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477878 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477891 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477945 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477958 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477972 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477984 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753"
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.477999 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478013 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478027 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478038 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478050 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478062 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478078 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478091 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478103 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478116 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478129 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478142 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478156 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478168 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478180 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478192 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478205 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478217 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478229 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478242 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478254 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478266 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478278 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478291 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478304 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478318 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478331 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478343 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478356 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478370 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478383 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478396 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478410 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478422 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478435 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478447 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478461 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478474 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478492 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478505 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478518 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478530 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478544 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478557 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478572 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478592 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478608 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478621 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478635 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478650 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478689 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478702 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478715 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478726 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478739 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478751 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478763 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478775 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478882 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478898 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478913 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478927 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478941 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.478987 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.479002 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.479015 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.479028 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.479041 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.479053 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.479066 4904 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.479083 4904 reconstruct.go:97] "Volume reconstruction finished" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.479092 4904 reconciler.go:26] "Reconciler: start to sync state" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.502708 4904 manager.go:324] Recovery completed Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.507758 4904 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.510524 4904 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.511688 4904 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.511804 4904 kubelet.go:2335] "Starting kubelet main sync loop" Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.511854 4904 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.515034 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.515183 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.515328 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.516407 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.516500 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.516584 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.517340 4904 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.517441 4904 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.517548 4904 state_mem.go:36] "Initialized new in-memory state store" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.536151 4904 policy_none.go:49] "None policy: Start" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.537454 4904 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.537488 4904 state_mem.go:35] "Initializing new in-memory state store" Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.557957 4904 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.603682 4904 manager.go:334] "Starting Device Plugin manager" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.603730 4904 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.603746 4904 server.go:79] "Starting device plugin registration server" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.604117 4904 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.604148 4904 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.604263 4904 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.604383 4904 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.604399 4904 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.611909 4904 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.611983 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.613162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.613192 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.613202 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.613293 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.613955 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.613981 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.613994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.614371 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.614413 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.614422 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.614501 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.614529 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615441 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615460 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615468 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615537 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615556 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615565 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615536 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615691 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615703 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615709 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615805 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.615835 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616372 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616411 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616424 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616459 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616478 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616617 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616724 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.616755 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.617485 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.617520 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.617533 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.617669 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.617678 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.617690 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.617706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.617722 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.618209 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.618231 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.618241 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.619296 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.658250 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="400ms" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681044 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681099 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681135 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681163 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681191 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681223 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681253 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681288 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681317 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681346 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681374 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681400 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681427 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681453 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.681480 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.705290 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.706703 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.706760 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.706770 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.706790 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.707315 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782667 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782708 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782731 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782749 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782799 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782815 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782835 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782851 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782865 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782878 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782892 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782935 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782970 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783014 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783003 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782989 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783068 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783070 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782965 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783043 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783021 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.782967 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783136 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783141 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783174 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783235 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783259 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783297 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783378 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.783444 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.908258 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.909320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.909357 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.909373 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.909400 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 13:32:06 crc kubenswrapper[4904]: E1121 13:32:06.909800 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.945173 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.951240 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.963253 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.980961 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:06 crc kubenswrapper[4904]: I1121 13:32:06.984584 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.994120 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-c19bfb8e247f57968fd3aa8b5a61893f7c91d65e8637d209f674c808305dcee2 WatchSource:0}: Error finding container c19bfb8e247f57968fd3aa8b5a61893f7c91d65e8637d209f674c808305dcee2: Status 404 returned error can't find the container with id c19bfb8e247f57968fd3aa8b5a61893f7c91d65e8637d209f674c808305dcee2
Nov 21 13:32:06 crc kubenswrapper[4904]: W1121 13:32:06.995316 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-dd8c2967629b4da95dd31393d8111fb0e030fa5e23197fda367faf345550b31e WatchSource:0}: Error finding container dd8c2967629b4da95dd31393d8111fb0e030fa5e23197fda367faf345550b31e: Status 404 returned error can't find the container with id dd8c2967629b4da95dd31393d8111fb0e030fa5e23197fda367faf345550b31e
Nov 21 13:32:07 crc kubenswrapper[4904]: W1121 13:32:07.000157 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8840c07d94f5792376def48991965f1845af5c02fb593c3f55617bd766a4c22f WatchSource:0}: Error finding container 8840c07d94f5792376def48991965f1845af5c02fb593c3f55617bd766a4c22f: Status 404 returned error can't find the container with id 8840c07d94f5792376def48991965f1845af5c02fb593c3f55617bd766a4c22f
Nov 21 13:32:07 crc kubenswrapper[4904]: W1121 13:32:07.007716 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a9228f78aa648916440d5892b80dcb60b2065cd9a0004f417f287adb457e765d WatchSource:0}: Error finding container a9228f78aa648916440d5892b80dcb60b2065cd9a0004f417f287adb457e765d: Status 404 returned error can't find the container with id a9228f78aa648916440d5892b80dcb60b2065cd9a0004f417f287adb457e765d
Nov 21 13:32:07 crc kubenswrapper[4904]: W1121 13:32:07.009029 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-17c41451c062f91235e4c84aa820f83c3eb77337f4bdbc2f72a3976b04c1cd26 WatchSource:0}: Error finding container 17c41451c062f91235e4c84aa820f83c3eb77337f4bdbc2f72a3976b04c1cd26: Status 404 returned error can't find the container with id 17c41451c062f91235e4c84aa820f83c3eb77337f4bdbc2f72a3976b04c1cd26
Nov 21 13:32:07 crc kubenswrapper[4904]: E1121 13:32:07.059024 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="800ms"
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.310246 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.311199 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.311224 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.311236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.311259 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:07 crc kubenswrapper[4904]: E1121 13:32:07.311751 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc"
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.456143 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.515801 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"17c41451c062f91235e4c84aa820f83c3eb77337f4bdbc2f72a3976b04c1cd26"}
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.517927 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a9228f78aa648916440d5892b80dcb60b2065cd9a0004f417f287adb457e765d"}
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.518898 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8840c07d94f5792376def48991965f1845af5c02fb593c3f55617bd766a4c22f"}
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.519701 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dd8c2967629b4da95dd31393d8111fb0e030fa5e23197fda367faf345550b31e"}
Nov 21 13:32:07 crc kubenswrapper[4904]: I1121 13:32:07.520614 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c19bfb8e247f57968fd3aa8b5a61893f7c91d65e8637d209f674c808305dcee2"}
Nov 21 13:32:07 crc kubenswrapper[4904]: W1121 13:32:07.536866 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:07 crc kubenswrapper[4904]: E1121 13:32:07.536939 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:07 crc kubenswrapper[4904]: W1121 13:32:07.723893 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:07 crc kubenswrapper[4904]: E1121 13:32:07.724423 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:07 crc kubenswrapper[4904]: W1121 13:32:07.785276 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:07 crc kubenswrapper[4904]: E1121 13:32:07.785538 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:07 crc kubenswrapper[4904]: E1121 13:32:07.860175 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="1.6s"
Nov 21 13:32:07 crc kubenswrapper[4904]: W1121 13:32:07.863800 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:07 crc kubenswrapper[4904]: E1121 13:32:07.863866 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.112468 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.114030 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.114058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.114072 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.114102 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 21 13:32:08 crc kubenswrapper[4904]: E1121 13:32:08.114583 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.456112 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.524956 4904 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d2025cc01f592f48daaa464e03d0255c574290883d982e98de7ccb0a80d84e18" exitCode=0
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.525029 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d2025cc01f592f48daaa464e03d0255c574290883d982e98de7ccb0a80d84e18"}
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.525060 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.527664 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.527706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.527715 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.528895 4904 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cbed200e0db2350c399be7d45cdbc50afafe154fbe9b14bb29affaa78a7719c9" exitCode=0
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.528949 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cbed200e0db2350c399be7d45cdbc50afafe154fbe9b14bb29affaa78a7719c9"}
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.529062 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.529670 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.529693 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.529705 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.532880 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9"}
Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.532904 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede"}
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede"} Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.532913 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0"} Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.534489 4904 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7" exitCode=0 Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.534553 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7"} Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.534573 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.535527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.535547 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.535557 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.535988 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b" exitCode=0 Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.536011 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b"} Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.536109 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.536855 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.536878 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.536887 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.539738 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.541031 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.541065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:32:08 crc kubenswrapper[4904]: I1121 13:32:08.541076 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:09 crc kubenswrapper[4904]: E1121 13:32:09.281396 4904 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.138:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a08d7ab4984c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-21 13:32:06.453683392 +0000 UTC m=+0.575215944,LastTimestamp:2025-11-21 13:32:06.453683392 +0000 UTC m=+0.575215944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.455858 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Nov 21 13:32:09 crc kubenswrapper[4904]: E1121 13:32:09.461343 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.138:6443: connect: connection refused" interval="3.2s" Nov 21 13:32:09 crc kubenswrapper[4904]: W1121 13:32:09.509205 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Nov 21 13:32:09 crc kubenswrapper[4904]: E1121 13:32:09.509324 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.544241 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806"} Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.544380 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.546186 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.546223 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.546235 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.549178 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3"} Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.549215 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1"} Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.551346 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79"} Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.551373 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f"} Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.554163 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"246881532f6f0540cde9ad7a3f020d0fd916146c008899e3b389c8a1aa34dbe0"} Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.554199 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.555028 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.555057 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.555068 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.557165 4904 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9fc6e6214811f3940f91cbbd75dfa881e1cce41f4a1bdf876aceca842cde4994" exitCode=0 Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.557192 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9fc6e6214811f3940f91cbbd75dfa881e1cce41f4a1bdf876aceca842cde4994"} Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.557255 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.558132 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.558162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.558172 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.715474 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.722551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.722586 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.722596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:09 crc kubenswrapper[4904]: I1121 13:32:09.722620 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 13:32:09 crc kubenswrapper[4904]: E1121 13:32:09.723017 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.138:6443: connect: connection refused" node="crc" Nov 21 13:32:09 crc kubenswrapper[4904]: W1121 13:32:09.990581 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Nov 21 13:32:09 crc kubenswrapper[4904]: E1121 13:32:09.990689 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:10 crc kubenswrapper[4904]: W1121 13:32:10.334902 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Nov 21 13:32:10 crc kubenswrapper[4904]: E1121 13:32:10.334978 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.456545 4904 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.563419 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa"} Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.563465 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.564538 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.564570 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.564580 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.567592 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11"} Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.567668 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294"} Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.567689 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e"} Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.570388 4904 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ba9ab8b7f02cd0e07d6f3e33b132de3b6318dc1a33d31ebab969b6f0c59d89ab" exitCode=0 Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.570420 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ba9ab8b7f02cd0e07d6f3e33b132de3b6318dc1a33d31ebab969b6f0c59d89ab"} Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.570502 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.570543 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.570546 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.571505 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.571548 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.571561 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.572217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.572242 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.572254 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.573056 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.573080 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:32:10 crc kubenswrapper[4904]: I1121 13:32:10.573093 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:10 crc kubenswrapper[4904]: W1121 13:32:10.992776 4904 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.138:6443: connect: connection refused Nov 21 13:32:10 crc kubenswrapper[4904]: E1121 13:32:10.992910 4904 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.138:6443: connect: connection refused" logger="UnhandledError" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.576318 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e917305d968379a7aef26d66a97827ab5d18d72ffaa322cd5c7775ccc43514f9"} Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.576361 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.576421 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.576434 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.576367 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f5ec140d99cadab514f017998d99613211866cf4549be3bd65b9a750e2231028"} Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.576497 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e14b4d4a83bf404d88bbe9c05c1c7aaa784e51e4398af646eb88cbeab433ea3c"} Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.577340 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.577358 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.577367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.577898 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.577909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.577916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.767989 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:11 crc 
kubenswrapper[4904]: I1121 13:32:11.768686 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.770685 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.770750 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.770766 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.776822 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:11 crc kubenswrapper[4904]: I1121 13:32:11.851218 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.586549 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.586542 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"65a9380aa33af3709989cedab0768db9be9ba4489ee5ee96fa485540252bb201"} Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.586783 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.586800 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.586885 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.586747 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"89f541cf5f16c370199597c526b0ff3917b07fc8bf93165506f100058760b62c"} Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.588251 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.588317 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.588334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.588448 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.588492 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.588508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.589138 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.589210 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.589233 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.829616 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.923578 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.924805 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.924841 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.924853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:12 crc kubenswrapper[4904]: I1121 13:32:12.924877 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.011595 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.514042 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.587982 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.588008 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.588050 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.588066 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.588080 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589187 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589229 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589238 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589251 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589200 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589396 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:13 crc kubenswrapper[4904]: I1121 13:32:13.589409 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.590531 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.590587 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.590689 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.591717 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.591771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.591785 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.592120 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.592186 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:14 crc kubenswrapper[4904]: I1121 13:32:14.592207 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:15 crc kubenswrapper[4904]: I1121 13:32:15.033792 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:15 crc kubenswrapper[4904]: I1121 13:32:15.593026 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:15 crc kubenswrapper[4904]: I1121 13:32:15.594008 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:15 crc kubenswrapper[4904]: I1121 13:32:15.594030 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:15 crc kubenswrapper[4904]: I1121 13:32:15.594041 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.012385 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.012453 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": 
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.226508 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.226692 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.227986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.228025 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.228034 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:16 crc kubenswrapper[4904]: E1121 13:32:16.620194 4904 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.669771 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.669930 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.671162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.671197 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:16 crc kubenswrapper[4904]: I1121 13:32:16.671206 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.321454 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.321803 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.323850 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.323896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.323907 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.326994 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.602858 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.604311 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.604363 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:18 crc kubenswrapper[4904]: I1121 13:32:18.604377 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:20 crc kubenswrapper[4904]: I1121 13:32:20.883803 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 21 13:32:20 crc kubenswrapper[4904]: I1121 13:32:20.884052 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:20 crc kubenswrapper[4904]: I1121 13:32:20.885377 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:20 crc kubenswrapper[4904]: I1121 13:32:20.885421 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:20 crc kubenswrapper[4904]: I1121 13:32:20.885430 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:20 crc kubenswrapper[4904]: I1121 13:32:20.953561 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.346013 4904 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.346076 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.350003 4904 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.350078 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.610024 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.610983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.611045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.611060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:21 crc kubenswrapper[4904]: I1121 13:32:21.624647 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Nov 21 13:32:22 crc kubenswrapper[4904]: I1121 13:32:22.613317 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:22 crc kubenswrapper[4904]: I1121 13:32:22.614896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:22 crc kubenswrapper[4904]: I1121 13:32:22.614946 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:22 crc kubenswrapper[4904]: I1121 13:32:22.614961 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.521332 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.521462 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.522534 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.522563 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.522574 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.526500 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.615302 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.616280 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.616328 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:23 crc kubenswrapper[4904]: I1121 13:32:23.616343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.013053 4904 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.013193 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.333632 4904 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.337066 4904 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.337134 4904 trace.go:236] Trace[1956244785]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 13:32:15.754) (total time: 10582ms):
Nov 21 13:32:26 crc kubenswrapper[4904]: Trace[1956244785]: ---"Objects listed" error: 10582ms (13:32:26.337)
Nov 21 13:32:26 crc kubenswrapper[4904]: Trace[1956244785]: [10.582297199s] [10.582297199s] END
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.337163 4904 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.338360 4904 trace.go:236] Trace[1571954210]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Nov-2025 13:32:13.616) (total time: 12721ms):
Nov 21 13:32:26 crc kubenswrapper[4904]: Trace[1571954210]: ---"Objects listed" error: 12721ms (13:32:26.338)
Nov 21 13:32:26 crc kubenswrapper[4904]: Trace[1571954210]: [12.721868292s] [12.721868292s] END
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.338397 4904 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.339164 4904 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.340428 4904 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.344686 4904 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.376138 4904 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35256->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.376196 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35256->192.168.126.11:17697: read: connection reset by peer"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.376253 4904 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35268->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.376323 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35268->192.168.126.11:17697: read: connection reset by peer"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.376772 4904 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.376870 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.377586 4904 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.378368 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.454344 4904 apiserver.go:52] "Watching apiserver"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.458374 4904 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.458860 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.459402 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.459538 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.459578 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.459558 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.459631 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.459671 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.459684 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.459801 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.459836 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.461166 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.461685 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.463194 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.463761 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.463792 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.463817 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.463844 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.463851 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.464061 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.483166 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.494089 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.506275 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.523170 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.532157 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.542278 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.551343 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.558864 4904 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.563981 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.573829 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.582671 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.591313 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.601725 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.611967 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.622857 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.624502 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.626222 4904 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11" exitCode=255
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.626257 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11"}
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.646885 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.646990 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647030 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647063 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647096 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647128 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647159 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647192 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647226 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647302 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647329 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647357 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647388 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647520 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647644 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647873 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648001 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648124 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648173 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648259 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648287 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648371 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648429 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648483 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648499 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.648721 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.649348 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.647702 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.649772 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.650582 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.650693 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.650739 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.650778 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.651443 4904 scope.go:117] "RemoveContainer" containerID="928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.651514 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.651590 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.654851 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.656299 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657276 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657368 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657461 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657488 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657513 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657544 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657575 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657602 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657631 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657672 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657698 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657723 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657752 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657779 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657805 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657832 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657856 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657883 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657926 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.657996 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658023 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658055 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658080 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658104 4904 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658130 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658156 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658182 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658207 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658233 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658258 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.658855 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.659160 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.659416 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.659649 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.659941 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.660585 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.660883 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.661112 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.661268 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.661224 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.661641 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.661837 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662022 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662074 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662110 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662142 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662168 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662211 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662239 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662266 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662277 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662291 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662321 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662350 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662378 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662403 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662428 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662454 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662478 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662504 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662530 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" 
(UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662542 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662554 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662581 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662606 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662633 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662695 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662714 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662743 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662773 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662800 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662827 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662851 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662875 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662899 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662917 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662930 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662955 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.662981 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663008 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663033 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663058 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663086 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663078 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663114 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663143 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663170 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663188 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663198 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663223 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663250 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663277 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663302 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663328 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663356 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663383 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663394 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663408 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663435 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663460 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663485 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663510 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663537 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 
13:32:26.663560 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663586 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663611 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663636 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663678 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663709 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663733 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663759 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663787 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663814 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 21 13:32:26 crc 
kubenswrapper[4904]: I1121 13:32:26.663871 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663901 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663924 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663948 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663971 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.663994 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664018 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664041 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664025 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664066 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664092 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664117 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664141 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664172 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664197 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664222 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664246 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664254 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664274 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664298 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664324 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.664355 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:27.164331055 +0000 UTC m=+21.285863687 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664385 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664418 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664444 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664474 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664495 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664525 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664547 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664564 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664583 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664605 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664614 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664628 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664633 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664673 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664806 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664913 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664945 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.664984 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665003 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665015 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665068 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665106 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665106 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665146 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665168 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665186 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665206 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665226 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665247 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665267 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665287 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665313 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665331 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665351 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665370 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665387 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665404 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665420 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665437 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665453 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665468 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665484 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665500 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665516 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665532 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665550 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665565 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665581 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665598 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665615 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665632 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665741 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.666029 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.666332 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.666348 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.666405 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.666564 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.666785 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.667441 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.667785 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.667813 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.669006 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.669279 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.669294 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.669546 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.669554 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.669817 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.669820 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.670083 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.670103 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.670269 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.670505 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.670529 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.670685 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.670981 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.671670 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.671713 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.671781 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.672199 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.672458 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.672572 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.672848 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.672997 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.665648 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673161 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673187 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673214 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673239 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673260 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673288 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673312 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673329 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673339 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673344 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673397 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673423 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673446 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673464 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673485 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673505 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673521 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673539 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673556 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673573 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673615 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673640 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673677 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673697 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673718 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673740 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673756 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673776 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673796 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673814 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673834 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673854 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673872 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673889 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.673988 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674001 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674012 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674024 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674036 4904 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674046 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674057 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674067 4904 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674078 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674089 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674101 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674112 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674123 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674134 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674146 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674158 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674169 4904 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674179 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674190 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674200 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674210 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674220 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674230 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674239 4904 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674247 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674256 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674265 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674274 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674284 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674293 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674301 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674311 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674327 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674337 4904 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674347 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674357 4904 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674366 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674375 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674386 4904 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674395 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674403 4904 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674412 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674420 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674430 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674439 4904 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674449 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674458 4904 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674466 4904 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674475 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674485 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674493 4904 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674511 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674520 4904 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674529 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674538 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674547 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674556 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674564 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674574 4904 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674582 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674592 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674601 4904 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674613 4904 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674622 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674635 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674644 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674746 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675162 4904 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675177 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675187 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675198 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675207 4904 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675217 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675226 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675235 4904 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675245 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675254 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674308 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.674765 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675081 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675328 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675612 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.675670 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.676049 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.676314 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.676916 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.677221 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.677492 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.677764 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.678219 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.678463 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.678869 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.680047 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.680316 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.680552 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.680555 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.681616 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.681820 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.681965 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.682018 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.682389 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.682701 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.683041 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.683253 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.683681 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.684062 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.684511 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.684739 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.684892 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.685079 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:27.18504951 +0000 UTC m=+21.306582072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.685092 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.685340 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.685414 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.685613 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.685628 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.685886 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:27.185542433 +0000 UTC m=+21.307074985 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.686037 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.686872 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.687611 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.687835 4904 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.687919 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.684897 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.685381 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.688637 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.689537 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.689737 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.690304 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.691237 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.691516 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.691638 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.691819 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.691871 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.691897 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.692094 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.692221 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.692258 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.692702 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.692862 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.693091 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.693342 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.694271 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.694428 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.694599 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.694790 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.694849 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.695010 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.695016 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.695146 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.695550 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.695727 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.695811 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.695881 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.696353 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.696370 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.696384 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.696497 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.696793 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.696892 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.696962 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697304 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697350 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697420 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697443 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697333 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697727 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697771 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697682 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.697940 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.698296 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.698469 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.698593 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.698611 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.698616 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.698817 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.699087 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.699227 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.699370 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.699378 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.699488 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.699510 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.699565 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.699717 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.699922 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.699957 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.699975 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.700050 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:27.200030915 +0000 UTC m=+21.321563467 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.700212 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.700419 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.700493 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.701023 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.701102 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.701486 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.701824 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.701819 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.702359 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.702603 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.702923 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.702954 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.703178 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.703344 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.706405 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.707012 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.707215 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.707724 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.707871 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.707974 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:26 crc kubenswrapper[4904]: E1121 13:32:26.708162 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:27.208146804 +0000 UTC m=+21.329679426 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.708593 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.711820 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.713369 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.714938 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.716000 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.730201 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.730307 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.732428 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.740344 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.742464 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.743862 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.750833 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776237 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776299 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776382 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776396 4904 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776443 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776454 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776458 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
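[Annotation] The three status_manager.go:875 "Failed to update status for pod" entries above share one root cause with the rest of this window: the pod.network-node-identity.openshift.io mutating webhook is served by the network-node-identity pod that this same node is still recreating, so every status PATCH bounces off https://127.0.0.1:9743 with connection refused. The patch body is embedded in the message as an escaped JSON string. A short Go sketch (sample fragment shortened, names mine) that strips one level of escaping and pretty-prints such a fragment:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Shortened, illustrative fragment of a patch as it appears inside
	// err="failed to patch status \"{...}\"" above. On a raw journal line
	// the quotes are escaped one level deeper (\\\"), so this replacement
	// has to be applied once per nesting level.
	raw := `{\"metadata\":{\"uid\":\"9d751cbb-f2e2-430d-9754-c882a5e924a5\"},\"status\":{\"podIP\":null,\"podIPs\":null}}`

	// Undo the quoting applied when the patch was embedded in the log line.
	unescaped := strings.ReplaceAll(raw, `\"`, `"`)

	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unescaped), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}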
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776512 4904 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776524 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776536 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776546 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776555 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776565 4904 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776589 4904 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776599 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776608 4904 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776617 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776627 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776636 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776646 4904 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776682 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776694 4904 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776704 4904 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776713 4904 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776723 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776733 4904 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776758 4904 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776768 4904 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776784 4904 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776793 4904 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776802 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776812 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776836 4904 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776849 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776859 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776868 4904 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776878 4904 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776886 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776933 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776942 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776953 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776963 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776973 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.776998 4904 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777008 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777034 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777043 4904 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777052 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777081 4904 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777091 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777099 4904 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777108 4904 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777120 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777129 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777156 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777165 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777176 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777186 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777196 4904 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777208 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777233 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777242 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777253 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777264 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777273 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777306 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777317 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777327 4904 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777336 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777346 4904 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777355 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777364 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777390 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777399 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777409 4904 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777419 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777429 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777438 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777463 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777473 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777482 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777492 4904 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777501 4904 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777511 4904 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777535 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777546 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777556 4904 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777565 4904 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777575 4904 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777585 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777596 4904 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777619 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777629 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777639 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777673 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777684 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777694 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777703 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on 
node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777712 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777721 4904 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777744 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777772 4904 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777786 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777796 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777822 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777835 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777851 4904 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777871 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777884 4904 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777926 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777939 4904 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:32:26 crc 
kubenswrapper[4904]: I1121 13:32:26.777951 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777964 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.777996 4904 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.778009 4904 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.778105 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.778124 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.778137 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.778151 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.778186 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.781158 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 21 13:32:26 crc kubenswrapper[4904]: I1121 13:32:26.787723 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 21 13:32:26 crc kubenswrapper[4904]: W1121 13:32:26.804587 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5603a9fc880a199ab331a85c1191638ae0e0f56f43229cb074109926e4dc57b4 WatchSource:0}: Error finding container 5603a9fc880a199ab331a85c1191638ae0e0f56f43229cb074109926e4dc57b4: Status 404 returned error can't find the container with id 5603a9fc880a199ab331a85c1191638ae0e0f56f43229cb074109926e4dc57b4
Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.073988 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
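[Annotation] The manager.go:1169 "Failed to process watch event ... Status 404" warnings are cadvisor catching up with cgroup slices whose containers disappeared while the kubelet was down, and the util.go:30 "No sandbox for pod can be found" lines are the pod workers deciding to build fresh sandboxes. The pod UID is readable straight out of the slice name, since systemd unit names replace the UID's hyphens with underscores; a tiny Go helper (mine, not kubelet code) to map them back:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// podUIDFromCgroup recovers a pod UID from a kubepods slice name, e.g.
// "kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice"
// -> "ef543e1b-8068-4ea3-b32a-61027b32e95d".
func podUIDFromCgroup(slice string) (string, bool) {
	re := regexp.MustCompile(`pod([0-9a-f_]+)\.slice`)
	m := re.FindStringSubmatch(slice)
	if m == nil {
		return "", false
	}
	return strings.ReplaceAll(m[1], "_", "-"), true
}

func main() {
	name := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-5603a9fc880a199ab331a85c1191638ae0e0f56f43229cb074109926e4dc57b4"
	for _, part := range strings.Split(name, "/") {
		if uid, ok := podUIDFromCgroup(part); ok {
			fmt.Println("pod UID:", uid) // ef543e1b-8068-4ea3-b32a-61027b32e95d
		}
	}
}

The recovered UID ef543e1b-8068-4ea3-b32a-61027b32e95d is exactly the network-node-identity-vrzqb pod whose volumes are being mounted elsewhere in this window.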
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 21 13:32:27 crc kubenswrapper[4904]: W1121 13:32:27.087068 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ca4faa66e2e40117f0c5f959f21af7eca26ac95f339f80dd3fa8093d828cd727 WatchSource:0}: Error finding container ca4faa66e2e40117f0c5f959f21af7eca26ac95f339f80dd3fa8093d828cd727: Status 404 returned error can't find the container with id ca4faa66e2e40117f0c5f959f21af7eca26ac95f339f80dd3fa8093d828cd727 Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.181943 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.182147 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:28.182125942 +0000 UTC m=+22.303658494 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.283334 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.283381 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.283402 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.283420 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
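[Annotation] The Unmounter.TearDownAt failure above is the teardown-side twin of the ConfigMap mount errors: the kubevirt.io.hostpath-provisioner CSI driver has not re-registered with the restarted kubelet, so the PVC unmount cannot even obtain a CSI client and is requeued, now at the doubled 1s delay. Drivers announce themselves by dropping a socket into the kubelet's plugin-registration directory; a small Go sketch that looks for it (the path is the conventional kubelet default and the socket-name prefix check is an assumption, neither comes from this log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Conventional kubelet plugin-registration directory; assumed, not taken from this log.
	const regDir = "/var/lib/kubelet/plugins_registry"
	const driver = "kubevirt.io.hostpath-provisioner"

	entries, err := os.ReadDir(regDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read registration dir:", err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		// Registration sockets conventionally carry the driver name.
		if strings.HasPrefix(e.Name(), driver) {
			fmt.Println("registration socket present:", e.Name())
			found = true
		}
	}
	if !found {
		fmt.Println("no registration socket for", driver, "- kubelet will keep retrying with backoff")
	}
}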
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283496 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283546 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:28.283531775 +0000 UTC m=+22.405064327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283611 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283670 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283686 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283751 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:28.283728779 +0000 UTC m=+22.405261431 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283618 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283797 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283809 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283809 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283843 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:28.283833992 +0000 UTC m=+22.405366654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:27 crc kubenswrapper[4904]: E1121 13:32:27.283954 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:28.283928734 +0000 UTC m=+22.405461286 (durationBeforeRetry 1s). 
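
The kube-api-access-* volumes failing here are projected volumes: each assembles the ServiceAccount token, the kube-root-ca.crt ConfigMap, and (on OpenShift) the openshift-service-ca.crt ConfigMap into one mount, and SetUp succeeds only if every source resolves, which is why projected.go reports a bracketed list naming both unresolved ConfigMaps. A rough sketch of that all-or-nothing aggregation follows, with invented names (source, collect); the real logic lives in kubelet's pkg/volume/projected.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // One projected source: the file it contributes plus a fetch that can fail.
    type source struct {
    	file  string
    	fetch func() (string, error)
    }

    // collect gathers every source and, on any failure, returns the combined
    // error list in the bracketed form seen in the kubelet log.
    func collect(sources []source) (map[string]string, error) {
    	files := make(map[string]string)
    	var errs []string
    	for _, s := range sources {
    		data, err := s.fetch()
    		if err != nil {
    			errs = append(errs, err.Error())
    			continue
    		}
    		files[s.file] = data
    	}
    	if len(errs) > 0 {
    		return nil, fmt.Errorf("[%s]", strings.Join(errs, ", "))
    	}
    	return files, nil
    }

    func main() {
    	notRegistered := func(ns, name string) func() (string, error) {
    		return func() (string, error) {
    			return "", fmt.Errorf("object %q/%q not registered", ns, name)
    		}
    	}
    	_, err := collect([]source{
    		{file: "ca.crt", fetch: notRegistered("openshift-network-diagnostics", "kube-root-ca.crt")},
    		{file: "service-ca.crt", fetch: notRegistered("openshift-network-diagnostics", "openshift-service-ca.crt")},
    		{file: "token", fetch: func() (string, error) { return "<JWT>", nil }},
    	})
    	fmt.Println("MountVolume.SetUp failed:", err)
    }

The Error: line that follows is the matching detail for the networking-console-plugin-cert secret, which fails the same way for the same reason.
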
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.630907 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57"} Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.631233 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe"} Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.631344 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5603a9fc880a199ab331a85c1191638ae0e0f56f43229cb074109926e4dc57b4"} Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.633770 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e"} Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.633970 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a3a69278cca804c3128960307c316936ae0ec44dfed97e9eb33d1a48ae9b6b27"} Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.637047 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.639687 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4"} Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.640729 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.642342 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ca4faa66e2e40117f0c5f959f21af7eca26ac95f339f80dd3fa8093d828cd727"} Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.653547 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.676752 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.695424 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.710249 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.723030 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.736983 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.752100 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.765846 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.782083 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.794281 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.807611 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.831127 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.855722 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:27 crc kubenswrapper[4904]: I1121 13:32:27.868385 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.191332 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.191523 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:30.191492916 +0000 UTC m=+24.313025468 (durationBeforeRetry 2s). 
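
Every status_manager.go:875 entry in this burst fails for the same reason, independent of which pod is being patched: pod status updates are routed through the pod.network-node-identity.openshift.io webhook served on 127.0.0.1:9743, and its serving certificate expired at 2025-08-24T17:21:41Z, months before the node's current clock of 2025-11-21T13:32:27Z. Until that certificate is rotated (the webhook container in network-node-identity-vrzqb started a second earlier, so regeneration is plausibly imminent), the API server rejects every patch. The failure is plain x509 time validation, reproducible with a throwaway self-signed certificate carrying the dates from the log (error handling elided for brevity):

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "network-node-identity webhook (illustrative)"},
    		NotBefore:    time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC),
    		NotAfter:     time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // expiry from the log
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	cert, _ := x509.ParseCertificate(der)

    	roots := x509.NewCertPool()
    	roots.AddCert(cert)
    	_, err := cert.Verify(x509.VerifyOptions{
    		Roots:       roots,
    		CurrentTime: time.Date(2025, 11, 21, 13, 32, 27, 0, time.UTC), // log timestamp
    	})
    	fmt.Println(err) // x509: certificate has expired or is not yet valid: ...
    }

The nestedpendingoperations entry just above, continued in the Error: line that follows, is a separate problem: the CSI teardown for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, now on a 2s backoff.
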
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.293464 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.294035 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.294306 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.293790 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.294502 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.294521 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.294159 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.294612 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.294628 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.294437 4904 
secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.295049 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.295073 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.295141 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:30.295099472 +0000 UTC m=+24.416632214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.295162 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:30.295154163 +0000 UTC m=+24.416686955 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.295202 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:30.295194884 +0000 UTC m=+24.416727686 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.295214 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:30.295208085 +0000 UTC m=+24.416740877 (durationBeforeRetry 2s). 
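The `object "..." not registered` errors do not necessarily mean the configmaps and secrets are absent from the API server; after a kubelet restart they typically mean the kubelet's local object managers have not yet re-registered the pods that reference those sources, so the volume plugins cannot resolve them. To rule out the genuinely-missing case, a diagnostic sketch using client-go (the kubeconfig path is an assumption):

```go
// Diagnostic sketch: confirm the configmaps named in the errors above
// exist on the API server. Assumes a reachable apiserver and a kubeconfig
// at the hypothetical path below.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ns := "openshift-network-diagnostics"
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		if _, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{}); err != nil {
			fmt.Printf("%s/%s: %v\n", ns, name, err)
			continue
		}
		fmt.Printf("%s/%s: present\n", ns, name)
	}
}
```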
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.512583 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.512757 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.513157 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.513232 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.513383 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:28 crc kubenswrapper[4904]: E1121 13:32:28.513460 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
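NetworkReady=false in these entries is the runtime-level readiness gate: the container runtime consults its CNI configuration directory and reports the network as not ready until a config file shows up there (ovn-kubernetes writes one once it comes up). A sketch of that check, assuming only the directory named in the error:

```go
// Minimal sketch of the readiness check implied by the error above: look
// for *.conf, *.conflist, or *.json files in the CNI conf directory and
// treat the network plugin as not ready until one exists.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatal(err)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config found:", filepath.Join(confDir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", confDir, "- network plugin not ready")
	}
}
```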
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.517075 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.518050 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.519227 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.520801 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.522308 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.523043 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.524003 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.525471 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.526348 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.527832 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.528462 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.529296 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.529833 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.530368 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.530973 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.531644 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.532413 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.532907 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.533586 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.534340 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.534943 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.535723 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.536229 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.538260 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.538810 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.540350 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.541492 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.542186 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.543457 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.544091 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.545335 4904 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.545473 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.547720 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.548850 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.549315 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.551277 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.553355 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.554014 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.555209 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.556127 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.557216 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.558143 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.558983 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.559813 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.560457 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.562484 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.563227 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.564695 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.565471 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.566641 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.567422 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.571940 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.572785 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 21 13:32:28 crc kubenswrapper[4904]: I1121 13:32:28.573325 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 21 13:32:29 crc kubenswrapper[4904]: I1121 13:32:29.650068 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8"} Nov 21 13:32:29 crc kubenswrapper[4904]: I1121 13:32:29.677839 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:29 crc kubenswrapper[4904]: I1121 13:32:29.696298 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:29 crc kubenswrapper[4904]: I1121 13:32:29.714151 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
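Each failed patch in these status_manager entries is a strategic-merge patch; the $setElementOrder/conditions directive pins the ordering of the conditions list while only the changed elements are sent. With the journal's backslash escaping stripped, the payloads are ordinary JSON. A sketch that pretty-prints an abbreviated one (the UID is taken from the network-check-target entry above; the payload is deliberately truncated, not the full patch):

```go
// Sketch: pretty-print an (abbreviated) strategic-merge status patch of
// the kind the status manager is trying to send above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	patch := []byte(`{"metadata":{"uid":"3b6479f0-333b-4a96-9adf-2099afdc2447"},` +
		`"status":{"$setElementOrder/conditions":[{"type":"Ready"}],` +
		`"conditions":[{"status":"False","type":"Ready"}]}}`)
	var out bytes.Buffer
	if err := json.Indent(&out, patch, "", "  "); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.String())
}
```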
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:29 crc kubenswrapper[4904]: I1121 13:32:29.783948 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:29 crc kubenswrapper[4904]: I1121 13:32:29.819214 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:29 crc kubenswrapper[4904]: I1121 13:32:29.834698 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:29 crc kubenswrapper[4904]: I1121 13:32:29.850699 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:29Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:30 crc kubenswrapper[4904]: I1121 13:32:30.210748 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.211052 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:34.211004333 +0000 UTC m=+28.332536925 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:30 crc kubenswrapper[4904]: I1121 13:32:30.312412 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:30 crc kubenswrapper[4904]: I1121 13:32:30.312497 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:30 crc kubenswrapper[4904]: I1121 13:32:30.312558 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:30 crc kubenswrapper[4904]: I1121 13:32:30.312620 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312820 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312854 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312854 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312891 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312914 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312927 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312935 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312824 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.312992 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:34.312951329 +0000 UTC m=+28.434483921 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.313059 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:34.313038641 +0000 UTC m=+28.434571233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.313082 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:34.313069402 +0000 UTC m=+28.434601984 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.313108 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:34.313093723 +0000 UTC m=+28.434626315 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:30 crc kubenswrapper[4904]: I1121 13:32:30.512220 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:30 crc kubenswrapper[4904]: I1121 13:32:30.512335 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.512449 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.512591 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:30 crc kubenswrapper[4904]: I1121 13:32:30.512808 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:30 crc kubenswrapper[4904]: E1121 13:32:30.512951 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.512102 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.512171 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.512271 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.512443 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.512563 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.512730 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.741458 4904 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.743725 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.743820 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.743835 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.743930 4904 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.772801 4904 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.773156 4904 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.774488 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.774561 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.774591 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.774610 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.774620 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:32Z","lastTransitionTime":"2025-11-21T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.846490 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:32Z is after 
2025-08-24T17:21:41Z" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.853995 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.854047 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.854062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.854083 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.854096 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:32Z","lastTransitionTime":"2025-11-21T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.870877 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:32Z is after 
2025-08-24T17:21:41Z" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.875871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.875917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.875928 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.875947 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.875957 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:32Z","lastTransitionTime":"2025-11-21T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.889941 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:32Z is after 
2025-08-24T17:21:41Z" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.894173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.894226 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.894238 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.894257 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.894270 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:32Z","lastTransitionTime":"2025-11-21T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.907778 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:32Z is after 
2025-08-24T17:21:41Z" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.911924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.911972 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.911981 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.911997 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.912007 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:32Z","lastTransitionTime":"2025-11-21T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.924544 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:32Z is after 
2025-08-24T17:21:41Z" Nov 21 13:32:32 crc kubenswrapper[4904]: E1121 13:32:32.924680 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.926664 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.926715 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.926728 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.926760 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:32 crc kubenswrapper[4904]: I1121 13:32:32.926782 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:32Z","lastTransitionTime":"2025-11-21T13:32:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.016814 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.020973 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.029762 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.029795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.029807 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.029823 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.029835 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.042096 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.056518 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.074827 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.082509 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.092699 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.103215 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4hpll"] Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.103554 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4hpll" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.103813 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xb8tn"] Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.104150 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.105134 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.105937 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.105955 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.106115 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.106406 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.106413 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.107625 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.108778 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.114868 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.132221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.132260 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.132272 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.132292 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.132307 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.134968 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.137371 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/96e1548b-c40d-450b-a2f1-51e56c467178-rootfs\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.137467 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwdw\" (UniqueName: \"kubernetes.io/projected/96e1548b-c40d-450b-a2f1-51e56c467178-kube-api-access-8hwdw\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.137507 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96e1548b-c40d-450b-a2f1-51e56c467178-proxy-tls\") pod 
\"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.137563 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96e1548b-c40d-450b-a2f1-51e56c467178-mcd-auth-proxy-config\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.137684 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/576c335c-cce7-461f-9308-546814064708-hosts-file\") pod \"node-resolver-4hpll\" (UID: \"576c335c-cce7-461f-9308-546814064708\") " pod="openshift-dns/node-resolver-4hpll" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.137765 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq9hc\" (UniqueName: \"kubernetes.io/projected/576c335c-cce7-461f-9308-546814064708-kube-api-access-lq9hc\") pod \"node-resolver-4hpll\" (UID: \"576c335c-cce7-461f-9308-546814064708\") " pod="openshift-dns/node-resolver-4hpll" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.147984 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.170368 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.191068 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.210878 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.224356 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.234567 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.234616 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.234628 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.234643 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.234679 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.238928 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/96e1548b-c40d-450b-a2f1-51e56c467178-rootfs\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.239001 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwdw\" (UniqueName: \"kubernetes.io/projected/96e1548b-c40d-450b-a2f1-51e56c467178-kube-api-access-8hwdw\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.239032 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96e1548b-c40d-450b-a2f1-51e56c467178-proxy-tls\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.239056 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96e1548b-c40d-450b-a2f1-51e56c467178-mcd-auth-proxy-config\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.239081 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/576c335c-cce7-461f-9308-546814064708-hosts-file\") pod \"node-resolver-4hpll\" (UID: \"576c335c-cce7-461f-9308-546814064708\") " pod="openshift-dns/node-resolver-4hpll" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.239052 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/96e1548b-c40d-450b-a2f1-51e56c467178-rootfs\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.239102 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq9hc\" (UniqueName: \"kubernetes.io/projected/576c335c-cce7-461f-9308-546814064708-kube-api-access-lq9hc\") pod \"node-resolver-4hpll\" (UID: \"576c335c-cce7-461f-9308-546814064708\") " pod="openshift-dns/node-resolver-4hpll" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.239273 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/576c335c-cce7-461f-9308-546814064708-hosts-file\") pod \"node-resolver-4hpll\" (UID: \"576c335c-cce7-461f-9308-546814064708\") " pod="openshift-dns/node-resolver-4hpll" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.240245 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/96e1548b-c40d-450b-a2f1-51e56c467178-mcd-auth-proxy-config\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.241237 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.247006 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/96e1548b-c40d-450b-a2f1-51e56c467178-proxy-tls\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.257190 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwdw\" (UniqueName: \"kubernetes.io/projected/96e1548b-c40d-450b-a2f1-51e56c467178-kube-api-access-8hwdw\") pod \"machine-config-daemon-xb8tn\" (UID: \"96e1548b-c40d-450b-a2f1-51e56c467178\") " pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 
13:32:33.262121 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq9hc\" (UniqueName: \"kubernetes.io/projected/576c335c-cce7-461f-9308-546814064708-kube-api-access-lq9hc\") pod \"node-resolver-4hpll\" (UID: \"576c335c-cce7-461f-9308-546814064708\") " pod="openshift-dns/node-resolver-4hpll" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.264544 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.277378 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.290435 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.306425 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.324393 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.337521 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.337563 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.337571 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.337586 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.337595 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.420797 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4hpll" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.428868 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.446196 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.446249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.446264 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.446289 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.446306 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.507927 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xgf6p"] Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.509199 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-txkm2"] Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.509364 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.512047 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.516718 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kgngm"] Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.518024 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.517674 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.518927 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.519234 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.519264 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.519572 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.524851 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.525151 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.525279 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.525414 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.525520 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.525643 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.525803 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.525853 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.527834 4904 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.536521 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.552498 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.553269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.553293 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.553304 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.553323 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.553336 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.584768 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.615329 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.630432 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642640 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-systemd\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642741 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-hostroot\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642764 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-conf-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642785 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-bin\") pod \"ovnkube-node-txkm2\" (UID: 
\"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642807 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-system-cni-dir\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642826 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-cni-multus\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642844 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-log-socket\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642898 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-etc-kubernetes\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642944 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-var-lib-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.642975 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-etc-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643023 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-daemon-config\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643069 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-os-release\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643092 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-netns\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643125 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-os-release\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643154 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658bx\" (UniqueName: \"kubernetes.io/projected/349c3b8f-5311-4171-ade5-ce7db3d118ad-kube-api-access-658bx\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643180 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-cnibin\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643205 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/90d7f33d-b498-4549-8c92-9b614313b06f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643235 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovn-node-metrics-cert\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643273 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-cnibin\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643296 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-config\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643337 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 
13:32:33.643380 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-cni-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643412 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-socket-dir-parent\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643437 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrzc\" (UniqueName: \"kubernetes.io/projected/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-kube-api-access-8mrzc\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643462 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-netd\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643501 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/90d7f33d-b498-4549-8c92-9b614313b06f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643532 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-k8s-cni-cncf-io\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643565 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-kubelet\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643590 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-system-cni-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " 
pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643745 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-kubelet\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643814 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-systemd-units\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643847 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643893 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-script-lib\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643967 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-multus-certs\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.643993 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-ovn\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.644053 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-env-overrides\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.644084 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsqn4\" (UniqueName: \"kubernetes.io/projected/90d7f33d-b498-4549-8c92-9b614313b06f-kube-api-access-rsqn4\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.644108 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-slash\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.644131 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-netns\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.644155 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-cni-bin\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.644182 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-node-log\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.644261 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.644323 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-cni-binary-copy\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.651955 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.655423 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.655465 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.655478 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.655496 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.655507 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.665837 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4hpll" event={"ID":"576c335c-cce7-461f-9308-546814064708","Type":"ContainerStarted","Data":"315293738f219e1fecd83a216ed883f6cf3865723aded5878291ca21f67f717b"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.670471 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.670505 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"1b6d4d1757eac157f50cd68a0258ad1a2869cbe8cc9c31300e89e36da32f4182"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.674168 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: E1121 13:32:33.683419 4904 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.699059 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.728848 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.744804 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745105 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-hostroot\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745175 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-conf-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745196 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-bin\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745218 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-system-cni-dir\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745231 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-hostroot\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745238 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-cni-multus\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745304 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-log-socket\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 
13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745323 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-etc-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745348 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-daemon-config\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745347 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-conf-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745385 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-log-socket\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745371 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-etc-kubernetes\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745426 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-system-cni-dir\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745343 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-bin\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745448 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-var-lib-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745463 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-etc-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745478 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-var-lib-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745489 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-netns\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745506 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-etc-kubernetes\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745537 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-os-release\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745564 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-os-release\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745590 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-658bx\" (UniqueName: \"kubernetes.io/projected/349c3b8f-5311-4171-ade5-ce7db3d118ad-kube-api-access-658bx\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745612 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-cnibin\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745633 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/90d7f33d-b498-4549-8c92-9b614313b06f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745639 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-os-release\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745682 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovn-node-metrics-cert\") pod 
\"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745719 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-cnibin\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745751 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745773 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-cni-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745794 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-socket-dir-parent\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745820 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-config\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745863 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/90d7f33d-b498-4549-8c92-9b614313b06f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745882 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-k8s-cni-cncf-io\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745900 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mrzc\" (UniqueName: \"kubernetes.io/projected/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-kube-api-access-8mrzc\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745905 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-os-release\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " 
pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745916 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-netd\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745936 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-system-cni-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745950 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-kubelet\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745966 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-kubelet\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745985 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746001 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-script-lib\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746020 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-multus-certs\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746037 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-systemd-units\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746067 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 
13:32:33.746087 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsqn4\" (UniqueName: \"kubernetes.io/projected/90d7f33d-b498-4549-8c92-9b614313b06f-kube-api-access-rsqn4\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746097 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-daemon-config\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746104 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-slash\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746124 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-ovn\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746143 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-env-overrides\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746164 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-node-log\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746187 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746214 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-cni-binary-copy\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746239 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-netns\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746261 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-cni-bin\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746289 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-systemd\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746353 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-systemd\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746437 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-cni-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746477 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-multus-socket-dir-parent\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746605 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-systemd-units\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-kubelet\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746719 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-kubelet\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746757 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-multus-certs\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746759 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-system-cni-dir\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" 
Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.745689 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-netns\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.746930 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-k8s-cni-cncf-io\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747057 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-netd\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747100 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-cnibin\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747151 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-cnibin\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747281 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-env-overrides\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747323 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-ovn-kubernetes\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747325 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-config\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747364 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-run-netns\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747391 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/90d7f33d-b498-4549-8c92-9b614313b06f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747412 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-cni-bin\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747451 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-node-log\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747482 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747500 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-slash\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-ovn\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747542 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-script-lib\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747569 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-host-var-lib-cni-multus\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747806 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-cni-binary-copy\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.747948 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/90d7f33d-b498-4549-8c92-9b614313b06f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: 
\"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.748047 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-openvswitch\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.748097 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/90d7f33d-b498-4549-8c92-9b614313b06f-cni-binary-copy\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.750009 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovn-node-metrics-cert\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.758441 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.758473 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.758482 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.758496 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.758507 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.764301 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.771128 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsqn4\" (UniqueName: \"kubernetes.io/projected/90d7f33d-b498-4549-8c92-9b614313b06f-kube-api-access-rsqn4\") pod \"multus-additional-cni-plugins-xgf6p\" (UID: \"90d7f33d-b498-4549-8c92-9b614313b06f\") " pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.774007 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-658bx\" (UniqueName: \"kubernetes.io/projected/349c3b8f-5311-4171-ade5-ce7db3d118ad-kube-api-access-658bx\") pod \"ovnkube-node-txkm2\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.786749 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.787195 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mrzc\" (UniqueName: \"kubernetes.io/projected/190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a-kube-api-access-8mrzc\") pod \"multus-kgngm\" (UID: \"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\") " pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.809749 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.830415 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.840734 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" Nov 21 13:32:33 crc kubenswrapper[4904]: W1121 13:32:33.852624 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90d7f33d_b498_4549_8c92_9b614313b06f.slice/crio-21ea41933b0f0d90df90990f001797360b4fcc3c30bdd13c2e502040cf61b5a2 WatchSource:0}: Error finding container 21ea41933b0f0d90df90990f001797360b4fcc3c30bdd13c2e502040cf61b5a2: Status 404 returned error can't find the container with id 21ea41933b0f0d90df90990f001797360b4fcc3c30bdd13c2e502040cf61b5a2 Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.860309 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.860342 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.860351 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.860367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.860376 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.863566 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kgngm" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.864456 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.867345 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:33 crc kubenswrapper[4904]: W1121 13:32:33.885045 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod349c3b8f_5311_4171_ade5_ce7db3d118ad.slice/crio-24d8c7eed4849f77f1d9f52cad612a92c215330a1faf9455737e9a369683c78c WatchSource:0}: Error finding container 24d8c7eed4849f77f1d9f52cad612a92c215330a1faf9455737e9a369683c78c: Status 404 returned error can't find the container with id 24d8c7eed4849f77f1d9f52cad612a92c215330a1faf9455737e9a369683c78c Nov 21 13:32:33 crc kubenswrapper[4904]: W1121 13:32:33.887960 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod190a4a47_76b8_4bbc_95f3_0f9c9c12fb1a.slice/crio-68fe5c6ef31363f8bf941da6b9c1b499501ba886dc6eb108d8ba156973145d3a WatchSource:0}: Error finding container 68fe5c6ef31363f8bf941da6b9c1b499501ba886dc6eb108d8ba156973145d3a: Status 404 returned error can't find the container with id 68fe5c6ef31363f8bf941da6b9c1b499501ba886dc6eb108d8ba156973145d3a Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.897324 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.929989 4904 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.965149 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.965188 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.965198 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.965211 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.965219 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:33Z","lastTransitionTime":"2025-11-21T13:32:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.966582 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.985548 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:33 crc kubenswrapper[4904]: I1121 13:32:33.997407 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:33Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.010961 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.024774 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.034916 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.047340 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.066943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.066983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.066994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: 
I1121 13:32:34.067008 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.067019 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.169783 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.169842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.169855 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.169878 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.169891 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.251166 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.251493 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.251448242 +0000 UTC m=+36.372980804 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.272768 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.272812 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.272823 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.272839 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.272850 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.352320 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.352383 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.352411 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.352453 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352523 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352548 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352549 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352584 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352596 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352608 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352688 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.35264637 +0000 UTC m=+36.474178922 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352717 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.352705521 +0000 UTC m=+36.474238073 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352719 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352858 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.352832164 +0000 UTC m=+36.474364936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.352976 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.353023 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:42.353015709 +0000 UTC m=+36.474548251 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.375983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.376032 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.376042 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.376060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.376073 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.478269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.478344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.478355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.478375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.478386 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.512604 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.512808 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.512902 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.512938 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.513028 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:34 crc kubenswrapper[4904]: E1121 13:32:34.513278 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.581459 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.581495 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.581503 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.581517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.581527 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.675799 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4hpll" event={"ID":"576c335c-cce7-461f-9308-546814064708","Type":"ContainerStarted","Data":"8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.678201 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgngm" event={"ID":"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a","Type":"ContainerStarted","Data":"e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.678232 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgngm" event={"ID":"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a","Type":"ContainerStarted","Data":"68fe5c6ef31363f8bf941da6b9c1b499501ba886dc6eb108d8ba156973145d3a"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.680291 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.695699 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0" exitCode=0 Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.695811 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.695831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.695877 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.695890 4904 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.695911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.695927 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.695854 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"24d8c7eed4849f77f1d9f52cad612a92c215330a1faf9455737e9a369683c78c"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.699225 4904 generic.go:334] "Generic (PLEG): container finished" podID="90d7f33d-b498-4549-8c92-9b614313b06f" containerID="4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889" exitCode=0 Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.699318 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerDied","Data":"4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.699370 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerStarted","Data":"21ea41933b0f0d90df90990f001797360b4fcc3c30bdd13c2e502040cf61b5a2"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.707431 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.729345 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.745736 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.762923 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.801494 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.801820 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.801861 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.801874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.801891 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.801901 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.817821 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.836377 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.851323 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.867720 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.885339 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.900751 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.905923 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.905965 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.905979 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.906003 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.906018 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:34Z","lastTransitionTime":"2025-11-21T13:32:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.915171 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.937349 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.955537 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.972625 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:34 crc kubenswrapper[4904]: I1121 13:32:34.991127 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.008739 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.008792 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.008802 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.008825 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.008836 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.008981 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.027190 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.040548 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.055033 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.073600 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.086566 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bf22h"] Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.086967 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.090041 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.090102 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.090050 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.090357 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.090790 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.111617 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.111667 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.111678 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.111693 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.111708 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.116702 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z 
is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.134028 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.151716 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.161030 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1d9e1501-945f-4229-baa9-b86acd98cb04-serviceca\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.161078 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2sg2\" (UniqueName: \"kubernetes.io/projected/1d9e1501-945f-4229-baa9-b86acd98cb04-kube-api-access-v2sg2\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.161159 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d9e1501-945f-4229-baa9-b86acd98cb04-host\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.165699 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.181495 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.197570 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.212985 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.214791 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.214821 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.214831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.214845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.214856 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.233311 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.252088 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.262905 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d9e1501-945f-4229-baa9-b86acd98cb04-host\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.262962 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1d9e1501-945f-4229-baa9-b86acd98cb04-serviceca\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.262988 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2sg2\" (UniqueName: \"kubernetes.io/projected/1d9e1501-945f-4229-baa9-b86acd98cb04-kube-api-access-v2sg2\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.263296 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d9e1501-945f-4229-baa9-b86acd98cb04-host\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.264388 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1d9e1501-945f-4229-baa9-b86acd98cb04-serviceca\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.271454 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.285736 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2sg2\" (UniqueName: \"kubernetes.io/projected/1d9e1501-945f-4229-baa9-b86acd98cb04-kube-api-access-v2sg2\") pod \"node-ca-bf22h\" (UID: \"1d9e1501-945f-4229-baa9-b86acd98cb04\") " pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.288770 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.302078 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.314013 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.318556 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.318875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.318969 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.319060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.319195 4904 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.329979 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.361109 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.375622 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.401287 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bf22h" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.402108 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.416726 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.430437 4904 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.430511 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.430526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.430555 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.430568 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.534848 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.534911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.534932 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.534959 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.534988 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.637267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.637313 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.637325 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.637346 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.637378 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.708902 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bf22h" event={"ID":"1d9e1501-945f-4229-baa9-b86acd98cb04","Type":"ContainerStarted","Data":"dcd2d05a3fc71ee074428be0db8d830659eaba9284bede3af25f56cf9c548f25"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.711972 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.711999 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.712010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.712021 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.712031 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.714449 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerStarted","Data":"2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.734448 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.740526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.740562 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.740574 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.740589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.740599 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.751408 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.764171 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.774287 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.787524 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.801959 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.818400 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.846079 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.847952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.848082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.848180 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.848268 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.848356 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.866413 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.909294 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.924624 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.938739 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.950955 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.951024 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.951037 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.951056 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.951086 4904 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:35Z","lastTransitionTime":"2025-11-21T13:32:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.953503 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:35 crc kubenswrapper[4904]: I1121 13:32:35.975730 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:35Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.052834 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.053161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.053170 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.053184 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.053193 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.156619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.156676 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.156686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.156704 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.156715 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.266139 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.266219 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.266239 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.266255 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.266290 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.369176 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.369236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.369249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.369268 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.369282 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.471515 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.471558 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.471568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.471584 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.471595 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.512425 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.512493 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.512425 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:36 crc kubenswrapper[4904]: E1121 13:32:36.512638 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:36 crc kubenswrapper[4904]: E1121 13:32:36.512714 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:36 crc kubenswrapper[4904]: E1121 13:32:36.512772 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.529782 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.545110 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.561227 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.574820 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.574871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.574881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.574901 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.574913 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.574910 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.590899 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.606816 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.636532 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.651696 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.674092 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.677740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.677781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.677794 4904 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.677810 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.677822 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.706577 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z 
is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.719760 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bf22h" event={"ID":"1d9e1501-945f-4229-baa9-b86acd98cb04","Type":"ContainerStarted","Data":"6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.724071 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.728166 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.731870 4904 generic.go:334] "Generic (PLEG): container finished" podID="90d7f33d-b498-4549-8c92-9b614313b06f" containerID="2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529" exitCode=0 Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.731912 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerDied","Data":"2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.743167 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.763452 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.778223 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.779993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.780036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.780054 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.780071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.780082 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.794020 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.818226 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.840680 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.855507 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 
2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.873100 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.883276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.883334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.883354 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.883395 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.883418 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.886207 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.898059 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.912162 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.928113 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.940393 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.955432 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.969380 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.980602 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.985989 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.986051 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.986070 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.986096 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.986114 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:36Z","lastTransitionTime":"2025-11-21T13:32:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:36 crc kubenswrapper[4904]: I1121 13:32:36.995072 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.088935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.088972 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.088979 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.088993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.089002 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.191455 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.191487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.191495 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.191512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.191521 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.294521 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.294560 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.294572 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.294588 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.294599 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.397925 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.398012 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.398036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.398065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.398087 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.500369 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.500404 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.500413 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.500429 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.500437 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.602702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.602918 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.603060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.603166 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.603231 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.705543 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.705585 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.705598 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.705615 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.705628 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.739544 4904 generic.go:334] "Generic (PLEG): container finished" podID="90d7f33d-b498-4549-8c92-9b614313b06f" containerID="20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2" exitCode=0 Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.740154 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerDied","Data":"20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.757461 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.771022 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.798507 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z 
is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.814695 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.814747 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.814766 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.814790 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.814811 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.818798 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: 
I1121 13:32:37.838459 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.853821 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.869193 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.882449 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.894206 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.907626 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.916911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.916933 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.916942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.916955 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.916965 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:37Z","lastTransitionTime":"2025-11-21T13:32:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.921563 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.934018 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.943687 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:37 crc kubenswrapper[4904]: I1121 13:32:37.956479 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:37Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.019210 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.019252 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.019262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.019277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.019286 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.122330 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.122372 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.122381 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.122395 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.122405 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.224625 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.224678 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.224687 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.224701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.224710 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.327849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.327896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.327906 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.327924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.327933 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.430566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.430604 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.430614 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.430629 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.430637 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.512592 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.512635 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.512705 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:38 crc kubenswrapper[4904]: E1121 13:32:38.512797 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:38 crc kubenswrapper[4904]: E1121 13:32:38.512884 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:38 crc kubenswrapper[4904]: E1121 13:32:38.512944 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.532964 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.533003 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.533012 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.533025 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.533036 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.635723 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.635800 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.635809 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.635822 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.635833 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.738197 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.738241 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.738249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.738268 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.738280 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.749051 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.751677 4904 generic.go:334] "Generic (PLEG): container finished" podID="90d7f33d-b498-4549-8c92-9b614313b06f" containerID="66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b" exitCode=0 Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.751721 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerDied","Data":"66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.774143 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.800271 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.823253 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z 
is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.837410 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.844028 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.844088 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.844106 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.844134 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.844154 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.854345 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.868774 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.884905 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbd
beaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.902294 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.918049 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.948140 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.948193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.948210 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.948231 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.948247 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:38Z","lastTransitionTime":"2025-11-21T13:32:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.948779 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:
32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.964050 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
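Note: every status update in this stretch fails the same way. Kubelet's status_manager builds a strategic-merge patch for the pod's .status, and the API server rejects it because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z. The patch itself is still recoverable from each entry; it is escaped twice, once when kubelet quotes it into err="..." and once when klog quotes the whole message. A minimal decoding sketch, assuming the wrapped physical lines above are first re-joined into single journal lines; the helper name and regex are illustrative, not part of any kubelet tooling:

    import json
    import re

    # Matches the quoted patch between 'failed to patch status \"' and
    # '\" for pod' in a kubelet journal entry like the ones above.
    PATCH_RE = re.compile(r'failed to patch status \\"(.*?)\\" for pod')

    def extract_status_patch(journal_line: str) -> dict:
        """Illustrative helper: recover the embedded status patch as JSON."""
        m = PATCH_RE.search(journal_line)
        if m is None:
            raise ValueError("line carries no status patch")
        # Undo both levels of backslash escaping (entries here are ASCII),
        # then parse the resulting JSON document.
        once = m.group(1).encode("ascii").decode("unicode_escape")
        twice = once.encode("ascii").decode("unicode_escape")
        return json.loads(twice)

Decoded, the multus-additional-cni-plugins patch above reports Initialized=False with whereabouts-cni-bincopy and whereabouts-cni still waiting on PodInitializing.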
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.977273 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:38 crc kubenswrapper[4904]: I1121 13:32:38.993514 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:38Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.010600 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.051482 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.051544 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.051556 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.051577 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.051592 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.155293 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.155379 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.155406 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.155443 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.155472 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.258508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.258560 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.258571 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.258589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.258599 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.361112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.361169 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.361178 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.361198 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.361208 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.464117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.464181 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.464202 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.464231 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.464253 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.567517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.567593 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.567615 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.567646 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.567706 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
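Note: the same four events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady) plus the setters.go "Node became not ready" condition recur roughly every 100 ms through this window. A quick tally makes the loop's shape obvious; the regex is tailored to exactly this journal format and the helper name is illustrative:

    import re
    from collections import Counter

    EVENT_RE = re.compile(
        r'kubelet_node_status\.go:724\] "Recording event message for node" '
        r'node="crc" event="(\w+)"')

    def tally_node_events(journal_lines):
        """Count each node event name across an iterable of journal lines."""
        # While the node stays NotReady, the four event names grow in
        # lockstep, one burst per status sync.
        return Counter(m.group(1)
                       for line in journal_lines
                       for m in EVENT_RE.finditer(line))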
Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.671510 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.671557 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.671573 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.671592 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.671608 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.759876 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerStarted","Data":"06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.773899 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.773938 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.773950 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.773967 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.773981 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
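Note: the NotReady condition clears only once a CNI config file lands in /etc/kubernetes/cni/net.d/, which on this cluster is ovn-kubernetes' job, and the ovnkube-node-txkm2 pod below is itself still PodInitializing. A sketch of that wait, mirroring rather than reproducing kubelet's readiness check; the directory comes verbatim from the message, the file extensions are an assumption:

    import os
    import time

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"  # path from the kubelet message

    def wait_for_cni_config(timeout_s: float = 300.0, poll_s: float = 2.0):
        """Illustrative wait: poll until a CNI config file appears."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                confs = [f for f in os.listdir(CNI_CONF_DIR)
                         if f.endswith((".conf", ".conflist", ".json"))]
            except FileNotFoundError:
                confs = []
            if confs:
                return confs
            time.sleep(poll_s)
        raise TimeoutError(f"no CNI configuration appeared in {CNI_CONF_DIR}")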
Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.776093 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.792465 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.812767 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.827418 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 
2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.842109 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.854278 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.868177 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbd
beaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.876791 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.876830 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.876840 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.876855 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.876864 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.883046 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.896883 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.917060 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.932205 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.950336 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.963757 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.980086 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.980118 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.980129 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.980152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.980163 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:39Z","lastTransitionTime":"2025-11-21T13:32:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:39 crc kubenswrapper[4904]: I1121 13:32:39.982981 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:39Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.082415 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.082490 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.082510 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.082542 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.082576 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.185092 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.185193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.185216 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.185755 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.186048 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.295295 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.295346 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.295356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.295372 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.295383 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.398320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.398367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.398378 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.398393 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.398402 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.501412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.501503 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.501525 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.501561 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.501581 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.512670 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.512641 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:40 crc kubenswrapper[4904]: E1121 13:32:40.512819 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:40 crc kubenswrapper[4904]: E1121 13:32:40.513075 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.513127 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:40 crc kubenswrapper[4904]: E1121 13:32:40.513321 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.604240 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.604300 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.604313 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.604339 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.604354 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.706577 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.706621 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.706632 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.706701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.706720 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.767087 4904 generic.go:334] "Generic (PLEG): container finished" podID="90d7f33d-b498-4549-8c92-9b614313b06f" containerID="06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0" exitCode=0 Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.767179 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerDied","Data":"06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.779885 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.780429 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.784611 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.800796 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.812196 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.812249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.812262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.812287 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.812300 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.815790 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.817672 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.837422 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.855066 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.871202 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.888957 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.906491 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.915384 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.915440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.915458 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.915481 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.915494 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:40Z","lastTransitionTime":"2025-11-21T13:32:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.927564 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.974252 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:40 crc kubenswrapper[4904]: I1121 13:32:40.991796 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:40Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.017895 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.019880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.019924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.019933 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.019947 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.019956 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.034338 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.048395 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.061946 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.074683 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.097432 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.110577 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.123260 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.123299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.123309 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.123323 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.123333 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.127953 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.138418 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.188122 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbd
beaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.215131 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.227807 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.227863 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.227877 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.227902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.227920 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.241235 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.255933 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a
12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.268247 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.279724 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.292806 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.306708 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.330828 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.330870 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.330885 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.330908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.330925 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.434052 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.434096 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.434108 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.434127 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.434141 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.538030 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.538065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.538073 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.538087 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.538097 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.640787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.640818 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.640827 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.640839 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.640850 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.743502 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.743552 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.743562 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.743582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.743592 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.789080 4904 generic.go:334] "Generic (PLEG): container finished" podID="90d7f33d-b498-4549-8c92-9b614313b06f" containerID="a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659" exitCode=0 Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.789144 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerDied","Data":"a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.789286 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.789896 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.812926 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.832156 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.832299 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.846921 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.846987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.847011 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.847041 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.847062 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.853528 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.877058 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.899632 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.920986 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.940922 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.951184 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.951283 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.951330 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.951368 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.951401 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:41Z","lastTransitionTime":"2025-11-21T13:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.958696 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.974289 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:41 crc kubenswrapper[4904]: I1121 13:32:41.995619 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbd
beaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:41Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.016068 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.031370 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.051549 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.056795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.056831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.056841 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.056856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.056866 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.070094 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.083500 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.097705 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.112748 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.128991 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.143317 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.154985 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.159156 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.159203 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.159214 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.159233 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.159246 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.168498 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.182969 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.195578 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.209232 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.230269 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.245243 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.260882 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.261779 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.261819 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.261829 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.261851 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.261862 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.273154 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.348312 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.348603 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:32:58.348565234 +0000 UTC m=+52.470097816 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.364402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.364441 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.364450 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.364464 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.364472 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.449600 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.449705 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.449754 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.449799 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.449815 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.449843 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.449862 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.449919 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.449932 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:58.449909945 +0000 UTC m=+52.571442527 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.449964 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.450024 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.450038 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.449972 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:58.449955517 +0000 UTC m=+52.571488109 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.450078 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.450127 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:58.45010345 +0000 UTC m=+52.571636102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.450276 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:32:58.450246703 +0000 UTC m=+52.571779275 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.467671 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.467728 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.467741 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.467759 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.467770 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.512583 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.512622 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.512689 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.512722 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.512796 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.512864 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.570557 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.570595 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.570604 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.570621 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.570631 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.674029 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.674073 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.674085 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.674101 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.674113 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.778607 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.778716 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.778736 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.778766 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.778785 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.798045 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.798943 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" event={"ID":"90d7f33d-b498-4549-8c92-9b614313b06f","Type":"ContainerStarted","Data":"8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.820940 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.844858 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.863764 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.881828 4904 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.881918 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.881947 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.881986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.882016 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.886628 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.903507 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.920511 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.934830 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.934885 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.934902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.934930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.934949 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.941962 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.952457 4904 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.955503 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.959561 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.959602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.959614 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.959638 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.959678 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.972959 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.976739 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.977975 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.978028 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.978044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.978068 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.978084 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.990646 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: E1121 13:32:42.992009 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:42Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.997941 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.997999 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.998019 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.998049 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:42 crc kubenswrapper[4904]: I1121 13:32:42.998068 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:42Z","lastTransitionTime":"2025-11-21T13:32:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.005525 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:43Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:43 crc kubenswrapper[4904]: E1121 13:32:43.019090 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:43Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.021270 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:43Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.024512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.024570 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.024588 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.024621 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.024639 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: E1121 13:32:43.041879 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:43Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:43 crc kubenswrapper[4904]: E1121 13:32:43.042080 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.042531 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:43Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.044439 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.044500 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: 
I1121 13:32:43.044517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.044546 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.044564 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.063358 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1
c7100880ad7e3127e4aa7a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:43Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.148289 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.148341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.148357 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.148374 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.148385 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.251818 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.251875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.251921 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.251943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.251961 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.355488 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.355535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.355544 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.355568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.355579 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.459067 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.459215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.459250 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.459297 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.459326 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.563147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.563268 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.563288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.563322 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.563346 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.668097 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.668176 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.668200 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.668239 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.668265 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.774355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.774425 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.774444 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.774476 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.774498 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.807314 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.877799 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.877851 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.877864 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.877887 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.877902 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.980619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.980688 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.980699 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.980718 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:43 crc kubenswrapper[4904]: I1121 13:32:43.980735 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:43Z","lastTransitionTime":"2025-11-21T13:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.084165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.084719 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.084739 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.084763 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.084781 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.187690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.187747 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.187762 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.187785 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.187796 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.291033 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.291073 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.291083 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.291099 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.291109 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.394798 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.394845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.394854 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.394871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.394881 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.499605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.499695 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.499719 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.499744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.499759 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.512435 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:44 crc kubenswrapper[4904]: E1121 13:32:44.512565 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.513110 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.513131 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:44 crc kubenswrapper[4904]: E1121 13:32:44.513322 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:44 crc kubenswrapper[4904]: E1121 13:32:44.513576 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.604277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.604370 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.604396 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.604429 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.604451 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.708354 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.708390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.708401 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.708416 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.708425 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.810416 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.810458 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.810467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.810484 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.810496 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.813101 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/0.log" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.816476 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26" exitCode=1 Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.816525 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.817245 4904 scope.go:117] "RemoveContainer" containerID="ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.836853 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.854221 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.870471 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.885720 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.898341 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.913573 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.913833 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.913896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.913966 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.914029 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:44Z","lastTransitionTime":"2025-11-21T13:32:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.914615 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.928949 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.944766 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.959552 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.969619 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.981304 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:44 crc kubenswrapper[4904]: I1121 13:32:44.996385 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.011494 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.017043 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.017098 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.017112 4904 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.017141 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.017153 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.034601 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1
c7100880ad7e3127e4aa7a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfe
a28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.038041 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.052329 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.074607 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfe
a28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.087692 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.099714 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.111577 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 
13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.119582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.119610 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.119619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.119636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.119646 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.124923 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.137425 4904 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.149004 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.161708 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.174760 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.187950 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8279948
8ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.203408 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.223085 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.223204 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.223221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.223250 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.223268 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.226595 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.240980 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:45Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.326605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.326686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.326703 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.326723 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.326738 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.430713 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.430778 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.430793 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.430817 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.430831 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.533553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.533601 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.533611 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.533628 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.533642 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.636409 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.636477 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.636491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.636514 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.636526 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.739725 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.739847 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.739906 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.739942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.740002 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.823348 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/0.log" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.827782 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.842511 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.842565 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.842575 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.842592 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.842603 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.946522 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.946636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.946672 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.946716 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:45 crc kubenswrapper[4904]: I1121 13:32:45.946732 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:45Z","lastTransitionTime":"2025-11-21T13:32:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.050929 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.050997 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.051015 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.051044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.051065 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.154071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.154122 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.154130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.154147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.154157 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.261083 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.261163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.261177 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.261201 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.261226 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.364354 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.364406 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.364417 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.364434 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.364445 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.473417 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.473504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.473521 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.473551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.473570 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.512387 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.512466 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:46 crc kubenswrapper[4904]: E1121 13:32:46.512616 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.512633 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:46 crc kubenswrapper[4904]: E1121 13:32:46.512830 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:46 crc kubenswrapper[4904]: E1121 13:32:46.513044 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.537705 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.551407 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.571792 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.581585 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.581640 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.581676 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.581702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.581719 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.598918 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1
c7100880ad7e3127e4aa7a26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfe
a28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.617260 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.639404 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.657968 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.706017 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.706082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.706105 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.706146 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.706171 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.706200 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.739082 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.759145 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.782284 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.799001 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 
13:32:46.808287 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.808327 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.808339 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.808355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.808365 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.817586 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.824914 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984"] Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.825431 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.831051 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.832885 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.833175 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.833784 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.846502 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.862920 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.878341 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.894317 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.908167 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.908332 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89k5g\" (UniqueName: \"kubernetes.io/projected/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-kube-api-access-89k5g\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.908419 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.908452 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.910342 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.910387 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.910399 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.910417 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.910429 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:46Z","lastTransitionTime":"2025-11-21T13:32:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.913108 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"co
ntainerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.932605 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.951627 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.964556 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.982400 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:46 crc kubenswrapper[4904]: I1121 13:32:46.999007 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.009871 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.009932 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89k5g\" (UniqueName: \"kubernetes.io/projected/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-kube-api-access-89k5g\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: 
\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.009958 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.009976 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.010699 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.010780 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.013011 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.013045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.013057 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.013077 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.013090 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.029969 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.030054 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89k5g\" (UniqueName: \"kubernetes.io/projected/1b6b684a-caa5-43e4-aba0-e64fd2090b7a-kube-api-access-89k5g\") pod \"ovnkube-control-plane-749d76644c-pr984\" (UID: \"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.034157 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ov
nkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.057229 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996f
db1c5f9fcbbda723911cf957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.085574 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.098574 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.110285 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.115062 4904 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.115090 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.115098 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.115111 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.115119 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.139824 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.217255 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.217301 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.217313 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.217329 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.217341 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.327285 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.327611 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.327620 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.327637 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.327647 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.431062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.431107 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.431115 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.431132 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.431142 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.533995 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.534046 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.534061 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.534082 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.534095 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.579450 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mx57c"] Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.580130 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:47 crc kubenswrapper[4904]: E1121 13:32:47.580223 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.595463 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.606600 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.620891 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.634180 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.636371 4904 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.636421 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.636433 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.636451 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.636463 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.651031 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.662235 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.677529 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.695016 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.708584 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.718002 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.718112 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47gr\" (UniqueName: \"kubernetes.io/projected/7c038482-babe-44bf-a8ff-89415347e81f-kube-api-access-z47gr\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.725870 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.745787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.745839 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.745850 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.745872 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.745884 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.746576 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.757859 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.770048 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.782974 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.802379 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.819759 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.819901 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z47gr\" (UniqueName: \"kubernetes.io/projected/7c038482-babe-44bf-a8ff-89415347e81f-kube-api-access-z47gr\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:47 crc kubenswrapper[4904]: E1121 13:32:47.819993 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:47 crc kubenswrapper[4904]: E1121 13:32:47.820117 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs podName:7c038482-babe-44bf-a8ff-89415347e81f nodeName:}" failed. No retries permitted until 2025-11-21 13:32:48.320089691 +0000 UTC m=+42.441622443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs") pod "network-metrics-daemon-mx57c" (UID: "7c038482-babe-44bf-a8ff-89415347e81f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.823403 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996f
db1c5f9fcbbda723911cf957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.837636 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/1.log" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.839936 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/0.log" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.841042 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z47gr\" (UniqueName: \"kubernetes.io/projected/7c038482-babe-44bf-a8ff-89415347e81f-kube-api-access-z47gr\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.844633 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957" exitCode=1 Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.844707 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.845034 4904 scope.go:117] "RemoveContainer" containerID="ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.846564 4904 scope.go:117] "RemoveContainer" containerID="fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957" Nov 21 13:32:47 crc 
kubenswrapper[4904]: E1121 13:32:47.846984 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.848341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.848419 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.848441 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.848501 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.848529 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.850336 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" event={"ID":"1b6b684a-caa5-43e4-aba0-e64fd2090b7a","Type":"ContainerStarted","Data":"7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.850433 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" event={"ID":"1b6b684a-caa5-43e4-aba0-e64fd2090b7a","Type":"ContainerStarted","Data":"aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.850465 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" event={"ID":"1b6b684a-caa5-43e4-aba0-e64fd2090b7a","Type":"ContainerStarted","Data":"e790c2ed76beecb0d91460f1d2f582bd5e3a062584f396b7d0f53b24ec962012"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.861341 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.877878 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.889809 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.903503 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.929519 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.169\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1121 13:32:47.436631 6332 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator for network=default : 1.274671ms\\\\nF1121 13:32:47.436632 6332 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b
38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.944358 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.952384 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.952813 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.952931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.953074 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.953168 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:47Z","lastTransitionTime":"2025-11-21T13:32:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.958966 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.970109 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.982297 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:47 crc kubenswrapper[4904]: I1121 13:32:47.993170 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:47Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.005230 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.017051 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.031225 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.041945 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 
13:32:48.052857 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc 
kubenswrapper[4904]: I1121 13:32:48.057232 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.057290 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.057307 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.057334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.057350 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.069416 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.084778 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.098258 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.116443 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.169\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1121 13:32:47.436631 6332 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator for network=default : 1.274671ms\\\\nF1121 13:32:47.436632 6332 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b
38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.127752 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.153677 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.160412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.160460 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.160477 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.160503 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.160521 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.167475 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.182940 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.200109 4904 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce9368402
8bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.217911 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.233849 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.254926 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.262607 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.262639 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.262650 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.262687 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.262698 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.267294 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.280280 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 
13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.294588 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.307277 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.323324 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:48Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.325547 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:48 crc kubenswrapper[4904]: E1121 13:32:48.325736 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:48 crc kubenswrapper[4904]: E1121 13:32:48.325791 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs podName:7c038482-babe-44bf-a8ff-89415347e81f nodeName:}" failed. No retries permitted until 2025-11-21 13:32:49.325779014 +0000 UTC m=+43.447311566 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs") pod "network-metrics-daemon-mx57c" (UID: "7c038482-babe-44bf-a8ff-89415347e81f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.365269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.365299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.365307 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.365321 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.365330 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.467384 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.467430 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.467442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.467461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.467474 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.512484 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:48 crc kubenswrapper[4904]: E1121 13:32:48.512634 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.512981 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.513083 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:48 crc kubenswrapper[4904]: E1121 13:32:48.513191 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:48 crc kubenswrapper[4904]: E1121 13:32:48.513232 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.570340 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.570385 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.570397 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.570445 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.570459 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.673718 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.673769 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.673779 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.673798 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.673808 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.776215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.776538 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.776622 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.776716 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.776794 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.855491 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/1.log" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.878992 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.879034 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.879044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.879058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.879070 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.981569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.981621 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.981631 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.981668 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:48 crc kubenswrapper[4904]: I1121 13:32:48.981680 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:48Z","lastTransitionTime":"2025-11-21T13:32:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.084117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.084153 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.084161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.084174 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.084184 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.187712 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.187798 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.187819 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.187856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.187879 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.291726 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.292206 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.292279 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.292358 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.292422 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.337450 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:49 crc kubenswrapper[4904]: E1121 13:32:49.337617 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:49 crc kubenswrapper[4904]: E1121 13:32:49.337833 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs podName:7c038482-babe-44bf-a8ff-89415347e81f nodeName:}" failed. No retries permitted until 2025-11-21 13:32:51.337809752 +0000 UTC m=+45.459342304 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs") pod "network-metrics-daemon-mx57c" (UID: "7c038482-babe-44bf-a8ff-89415347e81f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.397482 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.400107 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.400286 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.400404 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.400496 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.503215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.503727 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.503878 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.504020 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.504159 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.512722 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:49 crc kubenswrapper[4904]: E1121 13:32:49.512883 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
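
[annotation] The metrics-certs mount for network-metrics-daemon-mx57c is being retried on a doubling schedule: the earlier failure set durationBeforeRetry 1s, this one sets 2s, and later attempts back off further until the openshift-multus/metrics-daemon-secret object is registered. A hedged sketch of that doubling follows; the cap value is an assumption for illustration, not a number read from this log.

    // backoff.go: a sketch of the doubling retry interval visible above
    // (durationBeforeRetry 1s, then 2s, ...).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const maxDelay = 2 * time.Minute // assumed cap, for illustration only
        delay := time.Second
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed; no retries permitted for %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
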
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.606768 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.606821 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.606834 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.606855 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.606869 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.710753 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.710827 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.710842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.710864 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.710879 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.813822 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.813917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.813933 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.813949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.813962 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.917037 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.917111 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.917122 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.917136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:49 crc kubenswrapper[4904]: I1121 13:32:49.917146 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:49Z","lastTransitionTime":"2025-11-21T13:32:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.021976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.022057 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.022079 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.022113 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.022133 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.126351 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.126422 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.126437 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.126465 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.126487 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.230321 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.230383 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.230393 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.230412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.230426 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.334542 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.334631 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.334696 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.334736 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.334764 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.439436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.439543 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.439612 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.439717 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.439751 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.512799 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.512855 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.512806 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:50 crc kubenswrapper[4904]: E1121 13:32:50.512993 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:50 crc kubenswrapper[4904]: E1121 13:32:50.513203 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:50 crc kubenswrapper[4904]: E1121 13:32:50.513449 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.543563 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.543619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.543650 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.543696 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.543711 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.647722 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.647786 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.647802 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.647827 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.647849 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.750875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.750959 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.750983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.751019 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.751041 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.854444 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.854926 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.854943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.854976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.854990 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.958198 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.958367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.958398 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.958442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:50 crc kubenswrapper[4904]: I1121 13:32:50.958468 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:50Z","lastTransitionTime":"2025-11-21T13:32:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.061048 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.061095 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.061105 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.061119 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.061134 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.164091 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.164202 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.164222 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.164245 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.164265 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.268716 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.268801 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.268819 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.268846 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.268864 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.363936 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:51 crc kubenswrapper[4904]: E1121 13:32:51.364076 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:51 crc kubenswrapper[4904]: E1121 13:32:51.364192 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs podName:7c038482-babe-44bf-a8ff-89415347e81f nodeName:}" failed. No retries permitted until 2025-11-21 13:32:55.364168026 +0000 UTC m=+49.485700578 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs") pod "network-metrics-daemon-mx57c" (UID: "7c038482-babe-44bf-a8ff-89415347e81f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.370986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.371030 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.371042 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.371057 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.371066 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.473701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.473791 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.473813 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.473853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.473874 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.512972 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:51 crc kubenswrapper[4904]: E1121 13:32:51.513147 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.577189 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.577253 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.577263 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.577280 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.577290 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.680125 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.680540 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.680722 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.680886 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.681015 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.783802 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.783892 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.783905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.783923 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.783935 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.886487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.886780 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.886935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.887029 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.887120 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.990963 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.991005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.991021 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.991045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:51 crc kubenswrapper[4904]: I1121 13:32:51.991059 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:51Z","lastTransitionTime":"2025-11-21T13:32:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.094583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.094699 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.094719 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.094754 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.094775 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.199223 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.199514 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.199706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.199845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.199955 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.305134 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.305201 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.305219 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.305250 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.305269 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.409341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.409391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.409402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.409424 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.409447 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.511859 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.511911 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.511921 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.511942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.511953 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.512209 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.512236 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:52 crc kubenswrapper[4904]: E1121 13:32:52.512320 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:52 crc kubenswrapper[4904]: E1121 13:32:52.512476 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.512595 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:52 crc kubenswrapper[4904]: E1121 13:32:52.512921 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.615512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.615582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.615599 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.615628 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.615651 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.720499 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.720848 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.720882 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.720925 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.720947 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.825304 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.825387 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.825406 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.825435 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.825455 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.929203 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.929278 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.929301 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.929335 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:52 crc kubenswrapper[4904]: I1121 13:32:52.929358 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:52Z","lastTransitionTime":"2025-11-21T13:32:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.033630 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.033705 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.033720 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.033738 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.033750 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.136247 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.136290 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.136298 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.136314 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.136324 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.239441 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.239487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.239518 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.239540 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.239553 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.342224 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.342296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.342313 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.342343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.342361 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.344041 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.344084 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.344093 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.344109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.344120 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: E1121 13:32:53.362933 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:53Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.369745 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.369851 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.369872 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.369899 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.369919 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: E1121 13:32:53.388263 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:53Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.393843 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.393920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.393952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.393989 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.394030 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: E1121 13:32:53.413046 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:53Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.420020 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.420093 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
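The x509 error above is the root cause of the failed status patches: the serving certificate of the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node's clock reads 2025-11-21. A quick way to confirm this independently of the kubelet is to pull the certificate off the listener and print its validity window; the following Go sketch does that (the address is taken from the log message; run it on the node itself):

// certcheck.go - minimal sketch: connect to the webhook endpoint seen in the
// log (127.0.0.1:9743) and print the serving certificate's validity window.
// InsecureSkipVerify is deliberate: we want to inspect a cert that is expired.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // fetch the cert even though verification fails
	})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}

On this snapshot the output should show notAfter matching the 2025-08-24T17:21:41Z date in the error above.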
event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.420106 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.420136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.420156 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: E1121 13:32:53.436235 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:53Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.443060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.443133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
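Every retry carries the same strategic-merge patch: the $setElementOrder/conditions directive, allocatable/capacity figures, the four node conditions, the node's image list, and nodeInfo. Note that all four conditions report status False; for the three pressure conditions False is the healthy value, while Ready=False with reason KubeletNotReady is what keeps the node NotReady. The payload is double-escaped in the journal, but once unescaped it is ordinary JSON. A minimal sketch of decoding the conditions, assuming the payload has already been unescaped (the truncated literal below is copied from the entries above):

// conditions.go - minimal sketch: decode the status patch from the log (after
// unescaping) and print the node conditions, making Ready=False easy to spot.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type condition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
	Reason string `json:"reason"`
}

type patch struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

func main() {
	// Truncated example payload copied from the log entry above.
	raw := `{"status":{"conditions":[
		{"type":"MemoryPressure","status":"False","reason":"KubeletHasSufficientMemory"},
		{"type":"DiskPressure","status":"False","reason":"KubeletHasNoDiskPressure"},
		{"type":"PIDPressure","status":"False","reason":"KubeletHasSufficientPID"},
		{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`

	var p patch
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		log.Fatalf("unmarshal: %v", err)
	}
	for _, c := range p.Status.Conditions {
		fmt.Printf("%-14s status=%-5s reason=%s\n", c.Type, c.Status, c.Reason)
	}
}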
event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.443161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.443195 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.443221 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: E1121 13:32:53.465617 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:53Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:53 crc kubenswrapper[4904]: E1121 13:32:53.465850 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.468357 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
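The "update node status exceeds retry count" line marks the end of one sync attempt: the kubelet tries the PATCH a fixed number of times per sync tick (the nodeStatusUpdateRetry constant in kubelet_node_status.go, 5 in upstream kubelet at the time of writing) and then gives up until the next tick, which is why the same burst of identical errors recurs. A schematic reconstruction of that pattern, not the kubelet's actual code:

// retry.go - schematic sketch of the fixed-attempt retry behind
// "update node status exceeds retry count". Names are illustrative.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // upstream kubelet uses 5 attempts per sync

// Stand-in for the PATCH that the expired-cert webhook rejects above.
func tryUpdateNodeStatus() error {
	return errors.New("failed calling webhook: x509: certificate has expired")
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("attempt %d: %v (will retry)\n", i+1, err)
			continue
		}
		return // success: status accepted
	}
	fmt.Println("update node status exceeds retry count")
}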
event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.468445 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.468469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.468499 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.468516 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.512849 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:53 crc kubenswrapper[4904]: E1121 13:32:53.513090 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.572921 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.572994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.573014 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.573040 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.573055 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.677059 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.677122 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.677133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.677152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:53 crc kubenswrapper[4904]: I1121 13:32:53.677164 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:53Z","lastTransitionTime":"2025-11-21T13:32:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[repetition elided: the five-entry cycle above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") recurs with identical content, only the timestamps advancing, at 13:32:53.780, 13:32:53.885, 13:32:53.988, 13:32:54.092, 13:32:54.195, 13:32:54.299, 13:32:54.402 and 13:32:54.506.]
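[annotation: the cycle above pins the node NotReady for one reason only: the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so the container runtime keeps reporting NetworkReady=false until the network plugin writes its config. A minimal Python sketch of that readiness test, assuming nothing beyond the directory path quoted in the log; the helper name, extensions checked and polling interval are illustrative:]

    import os
    import time

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"  # path quoted in the log message

    def cni_config_present(conf_dir: str = CNI_CONF_DIR) -> bool:
        """Return True once any CNI config file (.conf, .conflist, .json) exists."""
        try:
            return any(
                name.endswith((".conf", ".conflist", ".json"))
                for name in os.listdir(conf_dir)
            )
        except FileNotFoundError:
            return False

    if __name__ == "__main__":
        while not cni_config_present():
            # The kubelet above re-records this condition roughly every 100 ms;
            # a manual check can poll far less aggressively.
            print("NetworkReady=false: no CNI configuration file found")
            time.sleep(1)
        print("CNI configuration present; the network plugin can come up")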
Nov 21 13:32:54 crc kubenswrapper[4904]: I1121 13:32:54.512119 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:54 crc kubenswrapper[4904]: I1121 13:32:54.512165 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:54 crc kubenswrapper[4904]: I1121 13:32:54.512266 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:54 crc kubenswrapper[4904]: E1121 13:32:54.512272 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:54 crc kubenswrapper[4904]: E1121 13:32:54.512365 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:54 crc kubenswrapper[4904]: E1121 13:32:54.512442 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
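[annotation: the util.go / pod_workers.go pairs above show the consequence for ordinary pods: any pod that needs a new sandbox is skipped while the runtime network is down. A sketch of that gating under stated assumptions (Pod, network_ready and sync_pod are illustrative stand-ins, not kubelet source; the pod names and UIDs reuse values from the log):]

    from dataclasses import dataclass

    @dataclass
    class Pod:
        name: str
        uid: str
        host_network: bool = False  # host-network pods do not need CNI

    def network_ready() -> bool:
        # Stand-in for the CRI runtime status reporting NetworkReady=false above.
        return False

    def sync_pod(pod: Pod) -> None:
        # Mirrors the gating in the log: no sandbox until the network is ready.
        if not pod.host_network and not network_ready():
            raise RuntimeError(
                f"network is not ready: cannot start a sandbox for {pod.name} "
                f"(uid {pod.uid})"
            )
        print(f"would create a new sandbox for {pod.name}")

    for pod in (
        Pod("network-check-target-xd92c", "3b6479f0-333b-4a96-9adf-2099afdc2447"),
        Pod("networking-console-plugin-85b44fc459-gdk6g", "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"),
        Pod("network-check-source-55646444c4-trplf", "9d751cbb-f2e2-430d-9754-c882a5e924a5"),
    ):
        try:
            sync_pod(pod)
        except RuntimeError as err:
            print(f"Error syncing pod, skipping: {err}")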
[repetition elided: further identical node-status cycles at 13:32:54.608, 13:32:54.714, 13:32:54.817, 13:32:54.921, 13:32:55.023, 13:32:55.126, 13:32:55.230 and 13:32:55.334.]
Nov 21 13:32:55 crc kubenswrapper[4904]: I1121 13:32:55.418163 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:55 crc kubenswrapper[4904]: E1121 13:32:55.418311 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:32:55 crc kubenswrapper[4904]: E1121 13:32:55.418374 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs podName:7c038482-babe-44bf-a8ff-89415347e81f nodeName:}" failed. No retries permitted until 2025-11-21 13:33:03.418357649 +0000 UTC m=+57.539890211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs") pod "network-metrics-daemon-mx57c" (UID: "7c038482-babe-44bf-a8ff-89415347e81f") : object "openshift-multus"/"metrics-daemon-secret" not registered
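[annotation: the nestedpendingoperations entry above is the kubelet's standard exponential backoff: the failed mount of metrics-daemon-secret is retried with a growing delay, here 8 s, with the next attempt stamped for 13:33:03. A sketch of the pattern with illustrative backoff constants (the real controller's initial delay and cap may differ):]

    import time

    def mount_with_backoff(mount_fn, initial_delay=1.0, factor=2.0, max_delay=16.0):
        """Retry mount_fn, doubling the wait after each failure, capped at max_delay."""
        delay = initial_delay
        while True:
            try:
                return mount_fn()
            except Exception as err:
                print(f"mount failed: {err}; no retries permitted for {delay:.0f}s")
                time.sleep(delay)
                delay = min(delay * factor, max_delay)

    attempts = 0

    def fake_mount():
        # Fails a few times the way the secret mount above does, then succeeds,
        # standing in for the secret finally being registered.
        global attempts
        attempts += 1
        if attempts < 4:
            raise RuntimeError('object "openshift-multus"/"metrics-daemon-secret" not registered')
        return "metrics-certs mounted"

    print(mount_with_backoff(fake_mount))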
[repetition elided: the five-entry node-status cycle recurs at 13:32:55.440.]
Nov 21 13:32:55 crc kubenswrapper[4904]: I1121 13:32:55.512823 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:55 crc kubenswrapper[4904]: E1121 13:32:55.512979 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f"
[repetition elided: further identical node-status cycles at 13:32:55.544, 13:32:55.647, 13:32:55.751, 13:32:55.855, 13:32:55.959 and 13:32:56.065.]
Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.167391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.167432 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.167446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.167461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.167472 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.230920 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.238217 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.245065 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",
\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.259800 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.271782 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.271838 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.271850 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.271879 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.271894 4904 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.272003 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.284463 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.301088 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.169\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1121 13:32:47.436631 6332 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator for network=default : 1.274671ms\\\\nF1121 13:32:47.436632 6332 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b
38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.311163 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.322214 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.334493 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.346931 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.358961 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.375341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.375389 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.375402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.375418 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.375432 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.376244 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.389989 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.415101 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.430523 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.443424 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":
\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.461010 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.478310 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.478355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.478364 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.478381 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.478394 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.513280 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.513351 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:56 crc kubenswrapper[4904]: E1121 13:32:56.513451 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:56 crc kubenswrapper[4904]: E1121 13:32:56.513577 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.513156 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:56 crc kubenswrapper[4904]: E1121 13:32:56.514337 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.533445 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.557297 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.570729 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 
13:32:56.581066 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.581153 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.581178 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.581211 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.581236 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.585983 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.605434 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.628506 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.647721 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.659908 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.677389 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.684378 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.684847 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.685068 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.685306 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.685506 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.705014 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996f
db1c5f9fcbbda723911cf957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef881e0213440bd2402c692251fdfdf63b736cb1c7100880ad7e3127e4aa7a26\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:43Z\\\",\\\"message\\\":\\\"43.937297 6161 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938004 6161 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1121 13:32:43.938021 6161 handler.go:208] Removed *v1.Node event handler 7\\\\nI1121 13:32:43.938047 6161 handler.go:208] Removed *v1.Node event handler 2\\\\nI1121 13:32:43.938151 6161 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938592 6161 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938767 6161 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1121 13:32:43.938838 6161 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1121 13:32:43.938906 6161 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.169\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1121 13:32:47.436631 
6332 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator for network=default : 1.274671ms\\\\nF1121 13:32:47.436632 6332 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.723051 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.744301 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.762245 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.780755 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.789725 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.789902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.789927 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.790017 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.790099 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.797992 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.813898 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.841376 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.893729 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.893900 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.893922 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.893950 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.893971 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.998065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.998133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.998143 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.998165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:56 crc kubenswrapper[4904]: I1121 13:32:56.998177 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:56Z","lastTransitionTime":"2025-11-21T13:32:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.101201 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.101249 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.101258 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.101276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.101288 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.205719 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.205795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.205827 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.205862 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.205886 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.309839 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.309899 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.309919 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.309943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.309960 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.414163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.414212 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.414225 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.414246 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.414258 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.512436 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:32:57 crc kubenswrapper[4904]: E1121 13:32:57.512727 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.516994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.517052 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.517074 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.517098 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.517114 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.620244 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.620285 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.620298 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.620315 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.620328 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
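
ANALYSIS: The "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" pairs above are a consequence rather than a separate fault: the kubelet wants to create a fresh pod sandbox for network-metrics-daemon-mx57c but declines to while NetworkReady=false, so the pod is simply requeued. No per-pod action is needed; sandbox creation proceeds as soon as the CNI config shows up. On the node this can be observed with (a generic sketch):

    crictl pods --name network-metrics-daemon   # lists no sandbox until the runtime network is ready
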
Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.723304 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.723345 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.723353 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.723367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.723376 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.827265 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.827328 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.827347 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.827372 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.827390 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.930254 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.930297 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.930305 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.930320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:57 crc kubenswrapper[4904]: I1121 13:32:57.930329 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:57Z","lastTransitionTime":"2025-11-21T13:32:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.034103 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.034166 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.034181 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.034202 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.034217 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.137247 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.137301 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.137315 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.137334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.137349 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.240065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.240142 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.240165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.240201 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.240256 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.343210 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.343260 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.343269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.343285 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.343294 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.350794 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.350942 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:33:30.350917962 +0000 UTC m=+84.472450514 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
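
ANALYSIS: This unmount failure is a different startup race: the volume manager wants to tear down PVC pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8, but the CSI driver kubevirt.io.hostpath-provisioner has not yet re-registered with the restarted kubelet, so there is no CSI client to call. Note the backoff: "No retries permitted until ... 13:33:30 ... (durationBeforeRetry 32s)", where m=+84.47 dates the retry to about 84 s after kubelet start. Drivers register through a socket they drop under the kubelet plugin registry; to check (a generic sketch; the exact socket name varies by driver):

    ls /var/lib/kubelet/plugins_registry/   # registration sockets, one per CSI driver
    oc get csidriver                        # cluster-level driver objects, once the API answers
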
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.445767 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.445805 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.445815 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.445830 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.445841 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.452301 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.452374 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.452395 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.452420 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452491 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452533 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:30.452521479 +0000 UTC m=+84.574054031 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452562 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452588 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452599 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452615 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452646 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:30.452629912 +0000 UTC m=+84.574162464 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452725 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452736 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452752 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:30.452745215 +0000 UTC m=+84.574277757 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452762 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.452851 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:33:30.452820587 +0000 UTC m=+84.574353179 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.513424 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.513502 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.513581 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.513411 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.513895 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:32:58 crc kubenswrapper[4904]: E1121 13:32:58.514029 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
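
ANALYSIS: The "object ... not registered" mount failures do not mean the ConfigMaps and Secrets are missing from the cluster: they come from the kubelet's local object cache, which after this restart has not yet (re)registered the objects referenced by these pods, so projected volumes built from kube-root-ca.crt, openshift-service-ca.crt, the nginx-conf ConfigMap and the networking-console-plugin-cert Secret cannot be constructed yet. Each attempt backs off 32 s, like the unmount above, and the errors clear once the kubelet's API watches sync. The objects themselves can be verified once the API responds (a generic sketch):

    oc -n openshift-network-console get configmap/networking-console-plugin secret/networking-console-plugin-cert
    oc -n openshift-network-diagnostics get configmap/kube-root-ca.crt configmap/openshift-service-ca.crt
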
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.587217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.587262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.587272 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.587287 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.587299 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.690166 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.690204 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.690217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.690234 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.690246 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.792706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.792744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.792752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.792767 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.792779 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.895752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.895842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.895863 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.895896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.895917 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.998253 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.998293 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.998302 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.998314 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:58 crc kubenswrapper[4904]: I1121 13:32:58.998323 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:58Z","lastTransitionTime":"2025-11-21T13:32:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.100980 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.101109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.101121 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.101138 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.101150 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.203616 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.203689 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.203708 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.203732 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.203753 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.306472 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.306534 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.306551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.306574 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.306592 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.410815 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.410914 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.410935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.410975 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.410996 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.513037 4904 scope.go:117] "RemoveContainer" containerID="fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.513293 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.513527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.513549 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.513560 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.513576 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.513608 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:32:59 crc kubenswrapper[4904]: E1121 13:32:59.513712 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.532821 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.548224 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.562811 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.581756 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.597484 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 
13:32:59.610157 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.616671 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.616717 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.616731 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.616751 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.616770 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.629054 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.643899 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.665162 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.678973 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.694348 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.713897 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.169\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1121 13:32:47.436631 6332 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator for network=default : 1.274671ms\\\\nF1121 13:32:47.436632 6332 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.719416 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.719513 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.719535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.719569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.719593 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.733693 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.753017 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.772221 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.787313 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.800966 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.823855 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.823914 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 
13:32:59.823926 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.823949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.823963 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.911454 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/1.log" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.918502 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030"} Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.918670 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.927557 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.927603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.927620 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.927646 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.927689 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:32:59Z","lastTransitionTime":"2025-11-21T13:32:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.945895 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.169\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1121 13:32:47.436631 6332 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator for network=default : 1.274671ms\\\\nF1121 13:32:47.436632 6332 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler 
{0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.969790 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:32:59 crc kubenswrapper[4904]: I1121 13:32:59.992571 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:32:59Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.014156 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.028550 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.030469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.030494 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.030504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.030523 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.030533 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.042050 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.055475 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.069690 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.093753 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.114331 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.129175 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.133712 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.133762 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.133772 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.133794 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.133808 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.147254 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.161790 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.176073 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.190622 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.205323 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.220612 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.236395 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.236448 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.236460 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.236479 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.236492 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.339663 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.340409 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.340477 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.340548 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.340611 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.443456 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.443516 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.443529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.443551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.443566 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.512871 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:00 crc kubenswrapper[4904]: E1121 13:33:00.513254 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.513064 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:00 crc kubenswrapper[4904]: E1121 13:33:00.514042 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.512988 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:00 crc kubenswrapper[4904]: E1121 13:33:00.514286 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.546516 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.546778 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.546899 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.547009 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.547079 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.654092 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.654141 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.654152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.654170 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.654185 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.757527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.757586 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.757596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.757618 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.757630 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.860333 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.860433 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.860450 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.860479 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.860498 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.924231 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/2.log" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.925261 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/1.log" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.928786 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030" exitCode=1 Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.928830 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.928881 4904 scope.go:117] "RemoveContainer" containerID="fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.929689 4904 scope.go:117] "RemoveContainer" containerID="6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030" Nov 21 13:33:00 crc kubenswrapper[4904]: E1121 13:33:00.929884 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.953977 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.963228 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.963267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.963279 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.963297 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.963309 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:00Z","lastTransitionTime":"2025-11-21T13:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.974748 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:00 crc kubenswrapper[4904]: I1121 13:33:00.994924 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:00Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.012792 4904 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.032880 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.049428 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.065831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.066073 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.066161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.066263 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.066358 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.066560 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.086318 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.133226 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.150420 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.170436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.170489 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.170502 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.170524 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.170537 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.170671 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.188586 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.207354 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.222395 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.241211 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.267815 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc54efc5c0bbc6f5f776adfac070bab7ecfa996fdb1c5f9fcbbda723911cf957\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-scheduler/scheduler_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-scheduler/scheduler\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.169\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1121 13:32:47.436631 6332 services_controller.go:360] Finished syncing service metrics on namespace openshift-kube-storage-version-migrator-operator for network=default : 1.274671ms\\\\nF1121 13:32:47.436632 6332 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler 
{0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.272830 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.272869 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.272882 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.272902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.272915 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.284584 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:01Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.376377 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.376430 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.376442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.376461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.376472 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.480344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.480394 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.480404 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.480421 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.480431 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.512381 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:01 crc kubenswrapper[4904]: E1121 13:33:01.512678 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.583798 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.583871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.583898 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.583931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.583956 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.687741 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.687829 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.687842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.687867 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.687881 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.792370 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.792432 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.792441 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.792458 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.792470 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.896032 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.896116 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.896135 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.896165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.896188 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.936213 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/2.log" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.999896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:01 crc kubenswrapper[4904]: I1121 13:33:01.999952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:01.999966 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:01.999986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:01.999999 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:01Z","lastTransitionTime":"2025-11-21T13:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.106777 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.106826 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.106839 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.106857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.106872 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.210277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.210371 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.210392 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.210424 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.211133 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.313815 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.313857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.313865 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.313878 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.313890 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.417827 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.417887 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.417900 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.417920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.417933 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.512606 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.512651 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.512753 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:02 crc kubenswrapper[4904]: E1121 13:33:02.512853 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:02 crc kubenswrapper[4904]: E1121 13:33:02.513011 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:02 crc kubenswrapper[4904]: E1121 13:33:02.513143 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.521144 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.521207 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.521225 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.521246 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.521265 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.624645 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.624687 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.624695 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.624710 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.624719 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.727568 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.727609 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.727620 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.727636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.727648 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.742298 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.743179 4904 scope.go:117] "RemoveContainer" containerID="6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030" Nov 21 13:33:02 crc kubenswrapper[4904]: E1121 13:33:02.743366 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.753592 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 
13:33:02.770755 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.796008 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.807892 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.820902 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.830464 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.830531 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.830544 4904 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.830564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.830575 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.842564 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c45
27511ebceb02554a9a309030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.856851 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.875379 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.886421 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.895975 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.907343 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.917975 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.928768 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.932401 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.932464 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.932477 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.932496 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.932509 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:02Z","lastTransitionTime":"2025-11-21T13:33:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.944252 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.956590 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.966628 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:02 crc kubenswrapper[4904]: I1121 13:33:02.975893 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:02Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.034420 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.034464 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.034474 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.034490 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.034506 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.137134 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.137193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.137205 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.137217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.137227 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.240343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.240394 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.240403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.240417 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.240426 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.342945 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.342977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.342985 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.343000 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.343009 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.437966 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.438096 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.438149 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs podName:7c038482-babe-44bf-a8ff-89415347e81f nodeName:}" failed. No retries permitted until 2025-11-21 13:33:19.438135337 +0000 UTC m=+73.559667889 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs") pod "network-metrics-daemon-mx57c" (UID: "7c038482-babe-44bf-a8ff-89415347e81f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.447893 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.448036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.448049 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.448070 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.448098 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.512726 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.512943 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.530020 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.530058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.530070 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.530089 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.530102 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.551172 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.555638 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.555701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
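The node-status patch just above fails against the companion node.network-node-identity.openshift.io webhook on the same port, so one way to confirm what that endpoint is actually serving is to complete a TLS handshake without verification and read the peer certificate's validity window. The sketch below is a hypothetical diagnostic (not part of the cluster tooling), roughly the Go equivalent of `openssl s_client -connect 127.0.0.1:9743`:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip verification so the handshake succeeds even though the
	// certificate is expired, then inspect its validity window.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// PeerCertificates[0] is the leaf the server presented.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}
```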
event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.555718 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.555740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.555757 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.577138 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.583405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.583738 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
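The near-identical "Error updating node status, will retry" records at 13:33:03.551, .577 and .606 come from the kubelet's bounded retry loop: each sync attempts the status PATCH several times back to back before giving up until the next sync period, and here every attempt hits the same expired-certificate webhook. A simplified sketch of that loop (modeled on pkg/kubelet in k8s.io/kubernetes; the retry count of 5 is an assumption based on upstream defaults):

```go
package main

import (
	"errors"
	"fmt"
)

// Assumed upstream default for consecutive node-status attempts per sync.
const nodeStatusUpdateRetry = 5

// updateNodeStatus retries the supplied attempt function up to the
// limit, logging each failure the way the kubelet does above.
func updateNodeStatus(try func(int) error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := try(i); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	webhookErr := errors.New("tls: failed to verify certificate: x509: certificate has expired or is not yet valid")
	// Every attempt fails identically, milliseconds apart -- matching
	// the .551/.577/.606 sequence in the log.
	err := updateNodeStatus(func(int) error { return webhookErr })
	fmt.Println(err)
}
```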
event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.583891 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.584026 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.584142 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.606706 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.612493 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.612538 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.612553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.612570 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.612584 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.684805 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.692482 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.692775 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.692916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.693066 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.693192 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.710752 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:03Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:03 crc kubenswrapper[4904]: E1121 13:33:03.710963 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.714046 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.714115 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.714133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.714163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.714189 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.817787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.817856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.817874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.817905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.817927 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.921756 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.921820 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.921833 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.921858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:03 crc kubenswrapper[4904]: I1121 13:33:03.921874 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:03Z","lastTransitionTime":"2025-11-21T13:33:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.025776 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.026168 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.026368 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.026566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.026820 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.129328 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.129373 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.129388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.129405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.129417 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.232714 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.232817 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.232853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.232891 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.232914 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.335986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.336400 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.336467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.336539 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.336603 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.440104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.440859 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.440898 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.440925 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.440940 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.512933 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.512972 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.513097 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:04 crc kubenswrapper[4904]: E1121 13:33:04.513302 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:04 crc kubenswrapper[4904]: E1121 13:33:04.513504 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:04 crc kubenswrapper[4904]: E1121 13:33:04.513785 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.544592 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.544733 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.544754 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.544787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.544819 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.648537 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.648597 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.648608 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.648627 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.648638 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.752109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.752197 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.752232 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.752276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.752300 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.855632 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.855735 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.855755 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.855790 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.855816 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.958642 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.959223 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.959241 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.959265 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:04 crc kubenswrapper[4904]: I1121 13:33:04.959284 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:04Z","lastTransitionTime":"2025-11-21T13:33:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.069761 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.069801 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.069811 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.069825 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.069834 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.173826 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.173888 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.173898 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.173914 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.173923 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.277387 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.277423 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.277431 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.277446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.277454 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.380147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.380192 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.380204 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.380219 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.380233 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.482506 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.482558 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.482577 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.482601 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.482617 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.512583 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c"
Nov 21 13:33:05 crc kubenswrapper[4904]: E1121 13:33:05.512770 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f"
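
The stream above shows the kubelet re-posting the same NotReady condition every ~100 ms for as long as the CNI configuration is missing. A minimal Python sketch for condensing that repetition when reading a saved copy of this journal offline; the file name kubelet.log and the function name are illustrative assumptions, while the regex is taken directly from the setters.go:603 record format above:

import json
import re

# The condition={...} payload in the setters.go:603 records is plain JSON
# (no nested braces), so a non-greedy match up to the first "}" captures it.
CONDITION = re.compile(
    r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*?\})'
)

def tally_not_ready(path="kubelet.log"):  # path is an assumed local copy
    text = open(path, encoding="utf-8").read()
    seen = {}
    for m in CONDITION.finditer(text):
        cond = json.loads(m.group("cond"))
        key = (m.group("node"), cond["reason"], cond["status"])
        seen[key] = seen.get(key, 0) + 1
    for (node, reason, status), n in sorted(seen.items()):
        print(f"node={node} reason={reason} status={status} repeats={n}")

if __name__ == "__main__":
    tally_not_ready()

On this excerpt that prints a single (crc, KubeletNotReady, False) key with its repeat count, which is the quickest way to confirm that nothing is changing between heartbeats.
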
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.585613 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.587112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.588156 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.588840 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.588906 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.691988 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.692223 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.692324 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.692403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.692538 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.796267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.796388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.796414 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.796453 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.796481 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.899316 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.899414 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.899437 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.899468 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:05 crc kubenswrapper[4904]: I1121 13:33:05.899487 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:05Z","lastTransitionTime":"2025-11-21T13:33:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.002233 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.002278 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.002286 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.002302 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.002322 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.105403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.105457 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.105467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.105485 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.105500 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.207783 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.207833 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.207845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.207861 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.207874 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.311317 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.311400 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.311422 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.311456 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.311478 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.414788 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.414842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.414851 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.414875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.414888 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.512233 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.512270 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.512281 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:06 crc kubenswrapper[4904]: E1121 13:33:06.512383 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:33:06 crc kubenswrapper[4904]: E1121 13:33:06.512588 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:33:06 crc kubenswrapper[4904]: E1121 13:33:06.512884 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
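
Each sandbox-less pod above fails its sync with the identical CNI error, and the cycle repeats about once per second. A short companion sketch (same assumed kubelet.log copy; the function name is illustrative) that ranks pods by how often their sync is being skipped; the attribute layout err= / pod= / podUID= is taken from the pod_workers.go:1301 records above, none of whose values contain embedded quotes:

import re
from collections import Counter

SYNC_ERR = re.compile(
    r'"Error syncing pod, skipping" err="([^"]*)" pod="([^"]+)" podUID="([^"]+)"'
)

def tally_sync_errors(path="kubelet.log"):  # path is an assumed local copy
    text = open(path, encoding="utf-8").read()
    failures = Counter((pod, uid) for _, pod, uid in SYNC_ERR.findall(text))
    for (pod, uid), n in failures.most_common():
        print(f"{n:3d}x {pod} ({uid})")

if __name__ == "__main__":
    tally_sync_errors()

Here the four pods seen so far (the two network-diagnostics checkers, the networking-console plugin and the multus metrics daemon) climb in lockstep, which points at one shared cause, the missing CNI config, rather than per-pod problems.
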
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.521162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.521247 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.521272 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.521304 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.521340 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.535414 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.555156 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.577047 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c45
27511ebceb02554a9a309030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.592627 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.605483 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.618885 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.628744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.628781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.628808 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.628826 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.628835 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.636520 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 
13:33:06.650727 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.668902 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421
e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.680125 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.694232 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.712645 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.728534 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.732513 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.732603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.732626 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.732699 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.732723 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.748359 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.767576 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.782742 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.795740 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:06Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.834784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.834859 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.834875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.834895 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.834946 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.938429 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.938498 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.938516 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.938540 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:06 crc kubenswrapper[4904]: I1121 13:33:06.938559 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:06Z","lastTransitionTime":"2025-11-21T13:33:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.041517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.041603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.041619 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.041643 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.041678 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.145490 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.145588 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.145603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.145628 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.145646 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.249191 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.249243 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.249252 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.249269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.249284 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.353088 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.353152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.353167 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.353192 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.353208 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.456513 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.456586 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.456605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.456636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.456709 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.512527 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:07 crc kubenswrapper[4904]: E1121 13:33:07.512816 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.559541 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.559591 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.559605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.559630 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.559643 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.662529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.662596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.662612 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.662631 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.662642 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.765329 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.765372 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.765381 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.765396 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.765406 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.868437 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.868541 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.868564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.868594 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.868613 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.972305 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.972354 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.972366 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.972390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:07 crc kubenswrapper[4904]: I1121 13:33:07.972401 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:07Z","lastTransitionTime":"2025-11-21T13:33:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.075625 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.075949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.075985 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.076019 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.076041 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.179744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.179806 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.179820 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.179839 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.179850 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.282799 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.282865 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.282881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.282909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.282991 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.385805 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.385875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.385887 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.385916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.385964 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.489386 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.489460 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.489506 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.489527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.489539 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.512959 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:08 crc kubenswrapper[4904]: E1121 13:33:08.513125 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.513357 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:08 crc kubenswrapper[4904]: E1121 13:33:08.513424 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.513618 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:08 crc kubenswrapper[4904]: E1121 13:33:08.513703 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.592862 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.592901 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.592913 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.592930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.592942 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.737448 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.737509 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.737524 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.737553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.737565 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.840471 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.840527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.840539 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.840562 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.840580 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.943505 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.943549 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.943558 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.943574 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:08 crc kubenswrapper[4904]: I1121 13:33:08.943585 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:08Z","lastTransitionTime":"2025-11-21T13:33:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.046033 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.046109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.046122 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.046145 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.046162 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.149630 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.149711 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.149725 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.149746 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.149760 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.252734 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.259637 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.259666 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.259697 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.259711 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.362564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.362612 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.362623 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.362642 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.362684 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.467706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.467782 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.467799 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.467829 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.467849 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.512965 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:09 crc kubenswrapper[4904]: E1121 13:33:09.513104 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.569842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.569883 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.569892 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.569904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.569914 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.673185 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.673261 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.673299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.673337 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.673361 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.776379 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.776446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.776465 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.776499 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.776523 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.880381 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.880436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.880448 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.880466 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.880476 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.983642 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.983693 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.983704 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.983719 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:09 crc kubenswrapper[4904]: I1121 13:33:09.983730 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:09Z","lastTransitionTime":"2025-11-21T13:33:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.089175 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.089257 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.089276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.089351 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.089370 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.192835 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.192888 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.192897 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.192913 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.192922 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.295245 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.295275 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.295288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.295302 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.295314 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.398745 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.398793 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.398804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.398822 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.398836 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.500481 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.500522 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.500535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.500553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.500564 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.512788 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:10 crc kubenswrapper[4904]: E1121 13:33:10.512890 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.512989 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:10 crc kubenswrapper[4904]: E1121 13:33:10.513220 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.513627 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:10 crc kubenswrapper[4904]: E1121 13:33:10.513807 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.603596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.603636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.603645 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.603676 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.603687 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.706221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.706315 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.706328 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.706353 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.706371 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.809298 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.809367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.809379 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.809401 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.809412 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.913182 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.913244 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.913273 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.913298 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:10 crc kubenswrapper[4904]: I1121 13:33:10.913312 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:10Z","lastTransitionTime":"2025-11-21T13:33:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.016707 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.016783 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.016805 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.016835 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.016857 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.120032 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.120116 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.120138 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.120169 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.120191 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.223864 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.224290 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.224299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.224316 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.224327 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.327090 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.327154 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.327172 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.327200 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.327217 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.430286 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.430337 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.430347 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.430366 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.430378 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.512430 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:11 crc kubenswrapper[4904]: E1121 13:33:11.512577 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.537926 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.538054 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.538074 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.538096 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.538111 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.641561 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.641622 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.641640 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.641708 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.641736 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.745920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.746043 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.746065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.746096 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.746116 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.850186 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.850269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.850287 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.850318 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.850338 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.954447 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.954511 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.954529 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.954551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:11 crc kubenswrapper[4904]: I1121 13:33:11.954566 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:11Z","lastTransitionTime":"2025-11-21T13:33:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.057916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.057965 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.057975 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.057991 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.058004 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.161847 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.161927 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.161945 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.161974 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.161999 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.264592 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.264693 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.264719 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.264747 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.264770 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.368436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.368516 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.368527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.368541 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.368552 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.471714 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.471801 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.471821 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.471850 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.471872 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.524848 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.524943 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:12 crc kubenswrapper[4904]: E1121 13:33:12.525118 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:12 crc kubenswrapper[4904]: E1121 13:33:12.525245 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.525696 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:12 crc kubenswrapper[4904]: E1121 13:33:12.525852 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.574698 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.574746 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.574758 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.574775 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.574789 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.677808 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.677897 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.677920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.677952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.677976 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.780117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.780152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.780162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.780175 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.780184 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.883819 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.883877 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.883895 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.883918 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.883933 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.986409 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.986458 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.986469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.986492 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:12 crc kubenswrapper[4904]: I1121 13:33:12.986503 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:12Z","lastTransitionTime":"2025-11-21T13:33:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.088744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.088781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.088790 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.088804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.088812 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.192279 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.192384 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.192404 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.192436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.192455 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.295764 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.295811 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.295820 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.295834 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.295844 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.398939 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.398978 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.398988 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.399000 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.399009 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.501423 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.501473 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.501487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.501507 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.501520 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.512697 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:13 crc kubenswrapper[4904]: E1121 13:33:13.512879 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.603126 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.603165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.603188 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.603202 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.603210 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.705823 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.705879 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.705895 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.705918 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.705931 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.808038 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.808085 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.808097 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.808113 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.808124 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.909842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.909881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.909889 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.909902 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:13 crc kubenswrapper[4904]: I1121 13:33:13.909911 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:13Z","lastTransitionTime":"2025-11-21T13:33:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.012186 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.012227 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.012237 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.012252 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.012261 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.075009 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.075053 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.075066 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.075083 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.075093 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.085938 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.088699 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.088744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.088754 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.088771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.088789 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.098883 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.101324 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.101364 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.101373 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.101386 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.101396 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.111446 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.114331 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.114373 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.114381 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.114395 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.114404 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.123977 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.126747 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.127404 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.127440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.127462 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.127476 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.137383 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:14Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.137560 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.139038 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.139075 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.139085 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.139099 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.139108 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.241899 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.241942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.241953 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.241968 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.241979 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.344027 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.344057 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.344065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.344076 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.344088 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.446164 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.446227 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.446241 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.446255 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.446266 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.512901 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.512984 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.512901 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.513025 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.513129 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.513175 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.513850 4904 scope.go:117] "RemoveContainer" containerID="6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030" Nov 21 13:33:14 crc kubenswrapper[4904]: E1121 13:33:14.514110 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.548380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.548405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.548413 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.548426 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.548434 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.651033 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.651086 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.651098 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.651112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.651123 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.753743 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.753795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.753812 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.753832 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.753843 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.856926 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.856984 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.857002 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.857019 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.857041 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.959801 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.959869 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.959886 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.959904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:14 crc kubenswrapper[4904]: I1121 13:33:14.959916 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:14Z","lastTransitionTime":"2025-11-21T13:33:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.062698 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.062745 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.062758 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.062779 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.062795 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.165467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.165516 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.165530 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.165569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.165586 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.269035 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.269099 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.269109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.269131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.269143 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.373318 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.373386 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.373400 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.373466 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.373484 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.478018 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.478110 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.478131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.478161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.478181 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.513010 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:15 crc kubenswrapper[4904]: E1121 13:33:15.513203 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.581091 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.581136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.581149 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.581168 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.581181 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.684578 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.684644 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.684676 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.684694 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.684710 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.788430 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.788517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.788608 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.788635 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.788702 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.892604 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.892732 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.892759 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.892794 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.892820 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.995067 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.995112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.995123 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.995136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:15 crc kubenswrapper[4904]: I1121 13:33:15.995144 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:15Z","lastTransitionTime":"2025-11-21T13:33:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.097928 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.097987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.098000 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.098021 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.098034 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.200510 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.200606 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.200621 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.200643 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.200674 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.303599 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.303631 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.303641 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.303668 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.303677 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.406380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.406426 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.406435 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.406457 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.406467 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.508446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.508502 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.508517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.508534 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.508546 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.512191 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.512317 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.512408 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:16 crc kubenswrapper[4904]: E1121 13:33:16.512445 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:16 crc kubenswrapper[4904]: E1121 13:33:16.513070 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:16 crc kubenswrapper[4904]: E1121 13:33:16.513417 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.524517 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.539878 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c45
27511ebceb02554a9a309030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.550718 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.561523 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.571227 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.580685 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.591027 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.600316 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.610740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.610777 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.610787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.610804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.610815 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.615819 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.630868 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.644988 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.654558 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 
13:33:16.663800 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.673970 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.687462 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.700166 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.710850 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:16Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.714222 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.714283 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.714298 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.714315 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.714327 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.816750 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.816792 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.816806 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.816823 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.816836 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.919323 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.919363 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.919373 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.919388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:16 crc kubenswrapper[4904]: I1121 13:33:16.919399 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:16Z","lastTransitionTime":"2025-11-21T13:33:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.023374 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.023462 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.023489 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.023525 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.023552 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.127387 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.127453 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.127472 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.127496 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.127513 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.230173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.230210 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.230220 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.230235 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.230243 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.334305 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.334370 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.334383 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.334404 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.334419 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.437507 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.437565 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.437582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.437602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.437618 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.513193 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:17 crc kubenswrapper[4904]: E1121 13:33:17.513455 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.541042 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.541101 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.541119 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.541146 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.541168 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.644215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.644282 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.644296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.644319 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.644332 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.747970 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.748031 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.748045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.748066 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.748158 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.850784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.851014 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.851072 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.851130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.851182 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.960286 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.960349 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.960359 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.960376 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:17 crc kubenswrapper[4904]: I1121 13:33:17.960387 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:17Z","lastTransitionTime":"2025-11-21T13:33:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.062689 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.062727 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.062735 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.062749 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.062758 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.164942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.164993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.165005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.165023 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.165033 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.267232 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.267276 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.267286 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.267301 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.267310 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.370909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.370963 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.370974 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.370993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.371004 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.474161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.474248 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.474280 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.474317 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.474336 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.512702 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.512769 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:18 crc kubenswrapper[4904]: E1121 13:33:18.512824 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.512770 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:18 crc kubenswrapper[4904]: E1121 13:33:18.512945 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:18 crc kubenswrapper[4904]: E1121 13:33:18.513069 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.577591 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.577648 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.577690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.577724 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.577741 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.681094 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.681598 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.681724 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.681831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.681920 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.785632 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.785710 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.785724 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.785745 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.785758 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.888805 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.888848 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.888860 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.888875 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.888885 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.990442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.990481 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.990489 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.990502 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:18 crc kubenswrapper[4904]: I1121 13:33:18.990511 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:18Z","lastTransitionTime":"2025-11-21T13:33:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.093880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.093922 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.093931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.093945 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.093954 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.196171 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.196395 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.196491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.196555 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.196629 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.299089 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.299145 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.299156 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.299174 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.299186 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.402366 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.402424 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.402440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.402461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.402479 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.455612 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:19 crc kubenswrapper[4904]: E1121 13:33:19.455772 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:19 crc kubenswrapper[4904]: E1121 13:33:19.455822 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs podName:7c038482-babe-44bf-a8ff-89415347e81f nodeName:}" failed. No retries permitted until 2025-11-21 13:33:51.455806229 +0000 UTC m=+105.577338781 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs") pod "network-metrics-daemon-mx57c" (UID: "7c038482-babe-44bf-a8ff-89415347e81f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.505424 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.505765 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.505869 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.505971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.506058 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.512757 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:19 crc kubenswrapper[4904]: E1121 13:33:19.512893 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.609300 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.609349 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.609361 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.609380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.609391 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
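[editor's note: the MountVolume failure above means the kubelet cannot yet resolve the Secret backing the metrics-certs volume; "not registered" indicates the object has not been registered with the kubelet's secret manager, which is typically transient right after a kubelet restart. The retry scheduled for 13:33:51 (durationBeforeRetry 32s) reflects the volume manager's exponential backoff, which roughly doubles per failed attempt. Reconstructed from the UniqueName in the log, the pod-spec wiring involved would look roughly like the sketch below; the names come from the log, the container name and mount path are illustrative, and the structure is standard Kubernetes secret-volume syntax.]

    # sketch of the volume wiring implied by the UniqueName above (illustrative)
    volumes:
      - name: metrics-certs
        secret:
          secretName: metrics-daemon-secret   # the object the kubelet cannot find yet
    containers:
      - name: network-metrics-daemon          # container name assumed, not in this log
        volumeMounts:
          - name: metrics-certs
            mountPath: /etc/metrics-certs     # mount path illustrative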
Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.713299 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.713373 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.713392 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.713429 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.713453 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.816930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.816974 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.816987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.817005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.817018 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.920176 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.920221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.920231 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.920245 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:19 crc kubenswrapper[4904]: I1121 13:33:19.920259 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:19Z","lastTransitionTime":"2025-11-21T13:33:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.022686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.022734 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.022746 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.022764 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.022777 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.129491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.129779 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.129855 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.129924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.129989 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.232633 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.232784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.232803 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.232832 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.232851 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.335837 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.335880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.335890 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.335905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.335915 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.437984 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.438090 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.438131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.438164 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.438174 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.512948 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.512980 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:20 crc kubenswrapper[4904]: E1121 13:33:20.513173 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.513008 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:20 crc kubenswrapper[4904]: E1121 13:33:20.513308 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:20 crc kubenswrapper[4904]: E1121 13:33:20.513358 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.539870 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.539929 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.539938 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.539951 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.539960 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.641846 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.641920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.641939 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.641962 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.641982 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.744391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.744440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.744452 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.744471 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.744485 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.848130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.848170 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.848179 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.848193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.848202 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.949905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.949946 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.949954 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.949971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:20 crc kubenswrapper[4904]: I1121 13:33:20.949980 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:20Z","lastTransitionTime":"2025-11-21T13:33:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.052609 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.052690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.052708 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.052732 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.052747 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.154984 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.155267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.155331 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.155409 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.155466 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.257944 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.257993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.258005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.258022 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.258036 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.360453 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.360493 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.360502 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.360517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.360527 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.465346 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.465678 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.465780 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.465891 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.465993 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.512358 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:21 crc kubenswrapper[4904]: E1121 13:33:21.512877 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.568301 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.568360 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.568377 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.568401 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.568417 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.672431 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.672504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.672527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.672564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.672587 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.775527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.775555 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.775563 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.775575 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.775584 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.878441 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.878504 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.878526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.878555 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.878576 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.980962 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.980995 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.981041 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.981058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:21 crc kubenswrapper[4904]: I1121 13:33:21.981070 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:21Z","lastTransitionTime":"2025-11-21T13:33:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.010265 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/0.log" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.010316 4904 generic.go:334] "Generic (PLEG): container finished" podID="190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a" containerID="e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d" exitCode=1 Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.010341 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgngm" event={"ID":"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a","Type":"ContainerDied","Data":"e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.010831 4904 scope.go:117] "RemoveContainer" containerID="e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.032506 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.049326 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.078251 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c45
27511ebceb02554a9a309030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z"
[... final repetition of the NodeNotReady status block (13:33:22.084) elided; see the elision note near the start of this excerpt ...]
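[editor's note: the three "Failed to update status for pod" entries above (networking-console-plugin-85b44fc459-gdk6g, network-node-identity-vrzqb, ovnkube-node-txkm2) share one cause: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a TLS certificate that expired on 2025-08-24, almost three months before this boot, so every status patch routed through it is rejected. The ovnkube-controller crash loop embedded in the ovnkube-node-txkm2 patch ("back-off 20s", restartCount 2) follows the kubelet's usual container restart backoff, which starts around 10s and roughly doubles per crash up to a cap of about five minutes. Below is a standalone sketch of the expiry check the TLS handshake is effectively performing; the logic is plain Go standard library, and the file path is hypothetical (the webhook container mounts its cert under /etc/webhook-cert/ per the volumeMounts above, but the file name is assumed).]

    // illustrative Go: parse a PEM certificate and compare its validity
    // window against the clock, mirroring the x509 error in the log
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt") // path assumed
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("NotBefore:", cert.NotBefore, "NotAfter:", cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            // the state the log reports: current time is after NotAfter
            fmt.Println("certificate has expired")
        }
    }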
Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.092994 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.107531 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.122400 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.136462 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.148582 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.163224 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.174759 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.186466 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.186505 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.186515 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.186531 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.186541 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.190127 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.205149 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.222227 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.237721 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.255920 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4
e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.269429 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.287356 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:22Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.289877 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.289989 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.290007 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.290058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.290077 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.393154 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.393236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.393267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.393302 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.393330 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.495930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.495959 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.495971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.495986 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.495997 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.512802 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:22 crc kubenswrapper[4904]: E1121 13:33:22.512959 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.513162 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:22 crc kubenswrapper[4904]: E1121 13:33:22.513234 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.513379 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:22 crc kubenswrapper[4904]: E1121 13:33:22.513444 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.598792 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.598843 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.598852 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.598878 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.598888 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.701467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.701523 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.701551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.701579 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.701600 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.804227 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.804275 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.804291 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.804308 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.804319 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.907071 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.907109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.907118 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.907130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:22 crc kubenswrapper[4904]: I1121 13:33:22.907140 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:22Z","lastTransitionTime":"2025-11-21T13:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.008783 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.008824 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.008836 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.008853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.008866 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.014550 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/0.log" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.014595 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgngm" event={"ID":"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a","Type":"ContainerStarted","Data":"d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.027472 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.040174 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.051796 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.064264 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.073681 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.083630 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.095018 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.107534 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.111117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.111159 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.111171 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.111186 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.111197 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.120170 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.133096 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.147909 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.161045 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-
operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.174550 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.191033 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.207263 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.213587 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.213708 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.213722 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.213740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.213753 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.222120 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.240845 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:23Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.315542 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.315595 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.315605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.315622 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.315633 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.420366 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.420442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.420461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.420488 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.420512 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.512946 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:23 crc kubenswrapper[4904]: E1121 13:33:23.513083 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.523690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.523752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.523763 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.523781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.523793 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.627004 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.627076 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.627089 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.627113 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.627128 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.729846 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.729916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.729929 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.729956 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.729972 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.832896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.832935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.832947 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.832963 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.832974 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.936508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.936583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.936608 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.936636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:23 crc kubenswrapper[4904]: I1121 13:33:23.936692 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:23Z","lastTransitionTime":"2025-11-21T13:33:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.039272 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.039344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.039356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.039379 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.039394 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.142257 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.142298 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.142307 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.142330 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.142341 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.150984 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.151037 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.151054 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.151074 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.151089 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.167459 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.172088 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.172139 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.172150 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.172168 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.172181 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.186094 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.190275 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.190319 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.190333 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.190350 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.190364 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.202312 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.207117 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.207177 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.207188 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.207206 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.207221 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.224890 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.228974 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.229181 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.229355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.229491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.229679 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.243827 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:24Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.244021 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.245606 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.245731 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.245795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.245864 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.245926 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.347625 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.347891 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.348014 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.348121 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.348195 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.450635 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.450689 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.450700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.450714 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.450725 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.512890 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.513007 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.513074 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.513132 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.513335 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:24 crc kubenswrapper[4904]: E1121 13:33:24.513459 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.552982 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.553251 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.553343 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.553408 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.553471 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.655974 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.656345 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.656508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.656700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.656850 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.759288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.759344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.759356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.759374 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.759388 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.862915 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.863095 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.863115 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.863139 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.863156 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.965960 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.966003 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.966017 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.966032 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:24 crc kubenswrapper[4904]: I1121 13:33:24.966040 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:24Z","lastTransitionTime":"2025-11-21T13:33:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.068916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.068982 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.069004 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.069058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.069070 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.172705 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.172835 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.172852 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.172871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.172886 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.275046 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.275088 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.275102 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.275121 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.275133 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.379391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.379446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.379454 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.379469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.379478 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.482523 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.482567 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.482581 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.482596 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.482608 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.512704 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:25 crc kubenswrapper[4904]: E1121 13:33:25.513447 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.514143 4904 scope.go:117] "RemoveContainer" containerID="6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.586760 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.586807 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.586819 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.586839 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.586851 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.691977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.692015 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.692025 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.692039 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.692049 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.799732 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.799771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.799783 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.799795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.799805 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.903005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.903053 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.903062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.903077 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:25 crc kubenswrapper[4904]: I1121 13:33:25.903087 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:25Z","lastTransitionTime":"2025-11-21T13:33:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.006402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.006471 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.006489 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.006514 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.006534 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.031832 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/2.log" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.035019 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.035397 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.054607 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.073138 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.092359 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab8
6514be8c4d81dafec558d415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.104633 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.113287 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.115135 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.115166 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.115176 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.115191 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.115200 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.130498 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.146263 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.158195 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.170853 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.179907 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
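
The initContainerStatuses in the multus-additional-cni-plugins payload above record the chain of CNI installer init containers, each terminating with exitCode 0 ("Completed") before the next one starts. A small illustrative check of that ordering, with names and timestamps copied from the payload:

    # Init containers run strictly in sequence; each entry is
    # (name, startedAt, finishedAt), times copied from the payload above.
    inits = [
        ("egress-router-binary-copy", "13:32:34", "13:32:34"),
        ("cni-plugins",               "13:32:34", "13:32:35"),
        ("bond-cni-plugin",           "13:32:36", "13:32:36"),
        ("routeoverride-cni",         "13:32:37", "13:32:38"),
        ("whereabouts-cni-bincopy",   "13:32:38", "13:32:39"),
        ("whereabouts-cni",           "13:32:41", "13:32:41"),
    ]
    # each container starts no earlier than its predecessor finished
    for (_, _, prev_end), (_, start, _) in zip(inits, inits[1:]):
        assert start >= prev_end
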
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.191888 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.205678 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.217451 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.217984 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.218036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.218050 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.218072 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.218085 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.229609 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.242245 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
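
The recurring NodeNotReady condition has a separate cause from the webhook failures: the container runtime reports no CNI configuration file in /etc/kubernetes/cni/net.d/, and from the multus records further below that appears to be because ovn-kubernetes has not yet written its config. The check behind the message is essentially a directory scan for network configs; a hedged sketch (the extension list matches what libcni accepts, but treat the details as an approximation of the runtime's actual logic):

    # Approximate sketch of the "no CNI configuration file" check: look for
    # *.conf, *.conflist or *.json in the runtime's CNI config directory.
    from pathlib import Path

    cni_dir = Path("/etc/kubernetes/cni/net.d")  # path taken from the log
    confs = sorted(p for p in cni_dir.glob("*")
                   if p.suffix in {".conf", ".conflist", ".json"})
    if not confs:
        print(f"no CNI configuration file in {cni_dir}/. "
              "Has your network provider started?")

As long as that directory stays empty, the kubelet keeps flipping the node's Ready condition to False, which is why the "Node became not ready" records repeat throughout this window.
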
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.251871 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
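
Because klog quotes each patch inside err="...", the JSON in these records arrives with every quote backslash-escaped (and the check-endpoints termination message above is escaped one level deeper still). To recover a readable patch from a payload copied out of the journal, one level of unescaping plus a pretty-print is usually enough; a sketch assuming exactly one level of \" escaping, with a uid copied from the kube-apiserver-crc record above:

    # Hypothetical helper: strip one level of \" escaping from a payload
    # copied out of err="failed to patch status \"{...}\"" and pretty-print.
    import json

    raw = r'{\"metadata\":{\"uid\":\"48338e32-b69d-4fea-8f49-84f673b80640\"}}'
    patch = json.loads(raw.replace('\\"', '"'))
    print(json.dumps(patch, indent=2))
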
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.271183 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.320450 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.320496 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.320506 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.320523 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.320533 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.423371 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.423414 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.423426 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.423440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.423450 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.512776 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.512836 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:26 crc kubenswrapper[4904]: E1121 13:33:26.513074 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:26 crc kubenswrapper[4904]: E1121 13:33:26.513188 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.512817 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:26 crc kubenswrapper[4904]: E1121 13:33:26.513365 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
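
The err="failed to patch status" bodies throughout these records are Kubernetes strategic-merge patches. The $setElementOrder/conditions directive pins the ordering of the conditions list, which the API server merges by its "type" key, so the kubelet only sends the entries that changed. A minimal sketch of that shape, with the uid and one condition copied from the node-resolver-4hpll record above (illustrative, not a complete payload):

    # Shape of the kubelet's pod-status strategic-merge patch.
    # "$setElementOrder/conditions" preserves ordering of the merge-by-key list.
    patch = {
        "metadata": {"uid": "576c335c-cce7-461f-9308-546814064708"},
        "status": {
            "$setElementOrder/conditions": [
                {"type": "PodReadyToStartContainers"},
                {"type": "Initialized"},
                {"type": "Ready"},
                {"type": "ContainersReady"},
                {"type": "PodScheduled"},
            ],
            # only changed entries are sent; they merge by "type"
            # (None renders as JSON null, as in the logged payload)
            "conditions": [
                {"lastProbeTime": None,
                 "lastTransitionTime": "2025-11-21T13:32:34Z",
                 "status": "True", "type": "Ready"},
            ],
        },
    }
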
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.532861 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.532924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.532941 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.532963 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.532978 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.534507 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.545680 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.567803 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab8
6514be8c4d81dafec558d415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.579294 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.589867 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.602328 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.614524 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.627634 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.634508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.634562 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 
13:33:26.634574 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.634593 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.634608 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.648326 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.663629 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.679569 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.698077 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.713819 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.730246 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 
13:33:26.736793 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.736829 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.736837 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.736849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.736863 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.748553 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.758706 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.772063 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:26Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.839013 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.839050 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.839076 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.839091 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.839101 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.941165 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.941214 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.941225 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.941240 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:26 crc kubenswrapper[4904]: I1121 13:33:26.941251 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:26Z","lastTransitionTime":"2025-11-21T13:33:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.040500 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/3.log" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.041856 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/2.log" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.042906 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.042956 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.042974 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.042999 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.043035 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.044752 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" exitCode=1 Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.044812 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.044897 4904 scope.go:117] "RemoveContainer" containerID="6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.045897 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:33:27 crc kubenswrapper[4904]: E1121 13:33:27.046219 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.061684 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.074152 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.093458 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab8
6514be8c4d81dafec558d415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6af8d0d629249207d06be098c4b669364fa51c4527511ebceb02554a9a309030\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:00Z\\\",\\\"message\\\":\\\"v1.Namespace event handler 1\\\\nI1121 13:33:00.547388 6534 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1121 13:33:00.547396 6534 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1121 13:33:00.547477 6534 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547532 6534 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1121 13:33:00.547565 6534 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1121 13:33:00.547572 6534 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1121 13:33:00.547593 6534 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1121 13:33:00.547599 6534 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1121 13:33:00.547605 6534 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1121 13:33:00.547615 6534 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1121 13:33:00.547626 6534 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1121 13:33:00.547670 6534 factory.go:656] Stopping watch factory\\\\nI1121 13:33:00.547683 6534 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1121 13:33:00.547697 6534 ovnkube.go:599] Stopped ovnkube\\\\nI1121 13:33:00.547732 6534 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1121 13:33:00.547829 6534 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:26Z\\\",\\\"message\\\":\\\" 13:33:26.478590 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478594 6895 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1121 13:33:26.478598 6895 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1121 13:33:26.478606 6895 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478613 6895 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nI1121 13:33:26.478617 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nF1121 13:33:26.478616 6895 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213
b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.107255 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.119308 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 
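
Every status-patch failure in this window shares one root cause: the kubelet cannot POST to the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 because the webhook's serving certificate expired at 2025-08-24T17:21:41Z, long before the node's current clock of 2025-11-21T13:33:27Z. A minimal Python sketch, run on the node itself, to confirm the validity window the kubelet is reporting (the endpoint and port are taken from the log; the cryptography package is an assumed extra dependency):

    # Sketch only: fetch the serving certificate without verification
    # (an expired cert would otherwise abort the handshake) and print
    # its validity window.
    import ssl
    from cryptography import x509  # assumed dep: pip install cryptography

    pem = ssl.get_server_certificate(("127.0.0.1", 9743))  # endpoint from the log
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the errors

CRC normally rotates these internal certificates on the first start after a long shutdown, so the sketch only confirms what the kubelet already says; it does not fix anything.
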
13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.129614 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.142692 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.145709 4904 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.145782 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.145797 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.145852 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.145870 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.154174 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.165373 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.176189 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.187147 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.198804 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.212171 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.225310 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.238110 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4
e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.260845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.260884 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.260891 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.260905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.260914 4904 setters.go:603] "Node became not ready" node="crc" 
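
The patch bodies in these "failed to patch status" entries are ordinary JSON, but they arrive doubly Go-quoted by the kubelet's logger, which is what produces the runs of \\\" above. A small sketch for recovering them when triaging a dump like this one; two unescape rounds match the quoting depth of these entries, and the deeper-nested container termination messages may need one more:

    import json, re, sys

    # Grab the quoted patch between 'failed to patch status \"' and '\" for pod'.
    PATCH_RE = re.compile(r'failed to patch status \\"(.*?)\\" for pod')

    for line in sys.stdin:
        m = PATCH_RE.search(line)
        if not m:
            continue
        raw = m.group(1)
        for _ in range(2):  # undo two levels of %q-style quoting
            raw = raw.encode().decode("unicode_escape")
        patch = json.loads(raw)
        print(patch["metadata"]["uid"], sorted(patch["status"]))

Fed the journal on stdin, this prints each failed patch's pod UID plus the status fields the kubelet was trying to set, which makes it easy to see how many distinct pods are blocked by the same single webhook failure.
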
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.263472 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.283600 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:27Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.363152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.363189 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.363198 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.363214 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.363227 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.465475 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.465535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.465545 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.465561 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.465572 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.512505 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:27 crc kubenswrapper[4904]: E1121 13:33:27.512640 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.567255 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.567300 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.567309 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.567322 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.567336 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.670244 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.670315 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.670328 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.670341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.670349 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.773088 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.773124 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.773133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.773146 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.773164 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.875173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.875217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.875228 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.875245 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.875256 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.978036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.978078 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.978086 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.978101 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:27 crc kubenswrapper[4904]: I1121 13:33:27.978109 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:27Z","lastTransitionTime":"2025-11-21T13:33:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.049856 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/3.log" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.055235 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:33:28 crc kubenswrapper[4904]: E1121 13:33:28.055374 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.077153 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.080977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.081039 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.081056 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.081078 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.081098 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.090945 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.104256 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.115441 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.131210 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.152951 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:26Z\\\",\\\"message\\\":\\\" 13:33:26.478590 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478594 6895 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1121 13:33:26.478598 6895 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1121 13:33:26.478606 6895 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478613 6895 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nI1121 13:33:26.478617 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nF1121 13:33:26.478616 6895 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.166251 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.180124 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.183673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.183712 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.183723 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.183740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.183749 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.192215 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.203856 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.219553 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.232428 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.246060 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.262721 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.275978 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.289354 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.289394 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.289403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.289418 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.289427 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.293773 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.304257 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:28Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.391092 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.391156 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.391167 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.391181 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.391190 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.493264 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.493306 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.493315 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.493329 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.493338 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.512827 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.512876 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.512832 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:28 crc kubenswrapper[4904]: E1121 13:33:28.512945 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:28 crc kubenswrapper[4904]: E1121 13:33:28.513009 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:28 crc kubenswrapper[4904]: E1121 13:33:28.513083 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.595344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.595383 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.595399 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.595417 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.595433 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.697528 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.697584 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.697595 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.697614 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.697690 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.800390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.800431 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.800440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.800455 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.800465 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.902888 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.902941 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.902953 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.902971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:28 crc kubenswrapper[4904]: I1121 13:33:28.902986 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:28Z","lastTransitionTime":"2025-11-21T13:33:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.005436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.005471 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.005481 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.005493 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.005503 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.107959 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.108012 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.108025 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.108045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.108057 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.210579 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.210636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.210674 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.210698 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.210714 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.312826 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.312859 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.312868 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.312880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.312897 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.415292 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.415336 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.415345 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.415359 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.415368 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.512538 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:29 crc kubenswrapper[4904]: E1121 13:33:29.512711 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.517566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.517631 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.517689 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.517722 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.517743 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.620268 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.620320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.620331 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.620346 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:29 crc kubenswrapper[4904]: I1121 13:33:29.620354 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:29Z","lastTransitionTime":"2025-11-21T13:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[node status cycle repeated at 13:33:29.723, 13:33:29.825, 13:33:29.928, 13:33:30.031, 13:33:30.135, 13:33:30.238 and 13:33:30.340: the same NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID and NodeNotReady events plus the setters.go "Node became not ready" record with an identical KubeletNotReady/no-CNI-configuration condition]
Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.368438 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.368604 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:34.368568555 +0000 UTC m=+148.490101147 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
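The TearDown failure above is a startup ordering problem: the kubelet tries to clean up a mounted CSI volume before the kubevirt.io.hostpath-provisioner node plugin has re-registered after the restart, so no CSI client is available and the operation is parked for 1m4s (no retry until 13:34:34). Assuming oc access to this cluster, the node's currently registered drivers can be listed directly; a hypothetical check, not taken from this log:

    # Drivers the node advertises; the hostpath provisioner should reappear once its plugin pod restarts
    oc get csinode crc -o jsonpath='{.spec.drivers[*].name}'
    # Cluster-scoped CSIDriver objects, for comparison
    oc get csidriver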
[node status cycle repeated at 13:33:30.444; condition unchanged]
Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.469438 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.469509 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.469547 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.469601 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469736 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469761 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469812 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469827 4904 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469784 4904 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469908 4904 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469771 4904 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469748 4904 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.469886 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:34.469868445 +0000 UTC m=+148.591401067 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.470062 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:34.47003962 +0000 UTC m=+148.591572232 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.470082 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:34.470071221 +0000 UTC m=+148.591603873 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.470103 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:34.470091431 +0000 UTC m=+148.591624083 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.512784 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.512866 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.512904 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.512947 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.513092 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
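The burst of "object ... not registered" errors above is another restart race rather than missing objects: MountVolume.SetUp for the projected, configmap and secret volumes runs before the restarted kubelet's configmap/secret managers have registered the pods that reference those objects, so the internal cache lookups fail and each mount is requeued with the same 1m4s backoff. Assuming the objects exist as usual, they can be confirmed from the API side; a hypothetical check, not taken from this log:

    oc -n openshift-network-diagnostics get configmap kube-root-ca.crt openshift-service-ca.crt
    oc -n openshift-network-console get configmap networking-console-plugin
    oc -n openshift-network-console get secret networking-console-plugin-cert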
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:30 crc kubenswrapper[4904]: E1121 13:33:30.513249 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.547103 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.547148 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.547156 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.547169 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.547207 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.649955 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.649995 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.650007 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.650028 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.650041 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.756791 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.756829 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.756841 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.756858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.756872 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.859849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.859896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.859908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.859924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.859936 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.962564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.962626 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.962640 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.962681 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:30 crc kubenswrapper[4904]: I1121 13:33:30.962696 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:30Z","lastTransitionTime":"2025-11-21T13:33:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.064533 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.064585 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.064603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.064621 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.064635 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.167734 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.167813 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.167831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.167856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.167872 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.270261 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.270327 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.270340 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.270362 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.270377 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.372882 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.372918 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.372926 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.372940 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.372949 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.476527 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.476578 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.476587 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.476602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.476612 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.512626 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:31 crc kubenswrapper[4904]: E1121 13:33:31.512980 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
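Every NodeNotReady heartbeat in this stretch has the same root cause stated in its message: the container runtime reports NetworkReady=false because nothing has written a CNI config into /etc/kubernetes/cni/net.d/ yet. On OpenShift (OVN-Kubernetes on CRC) that file is created by the network plugin once its pods come up, so the condition normally clears on its own. Hypothetical checks on the node and cluster, assuming shell and oc access:

    # Empty until the network plugin initializes
    ls -l /etc/kubernetes/cni/net.d/
    # Network cluster operator status and the plugin pods it manages
    oc get co network
    oc -n openshift-ovn-kubernetes get pods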
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.579892 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.580142 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.580155 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.580175 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.580192 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.683788 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.683858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.683877 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.683904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.683925 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.786871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.786906 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.786914 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.786927 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.786936 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.889487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.889526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.889535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.889547 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.889556 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.991767 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.991807 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.991817 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.991831 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:31 crc kubenswrapper[4904]: I1121 13:33:31.991839 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:31Z","lastTransitionTime":"2025-11-21T13:33:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.094046 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.094081 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.094091 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.094106 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.094115 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.197143 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.197184 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.197193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.197210 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.197221 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.299611 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.299665 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.299675 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.299690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.299700 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.402114 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.402163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.402173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.402191 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.402202 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.504931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.504973 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.504985 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.505002 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.505015 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:32Z","lastTransitionTime":"2025-11-21T13:33:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.512579 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.512640 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:32 crc kubenswrapper[4904]: E1121 13:33:32.512717 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:32 crc kubenswrapper[4904]: I1121 13:33:32.512603 4904 util.go:30] "No sandbox for pod can be found. 
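The paired "No sandbox for pod can be found. Need to start a new one" and "Error syncing pod, skipping" records are the per-pod consequence of the node-level condition: creating a new pod sandbox requires CNI, so each sync attempt is skipped and the pod requeued until the network is ready. The runtime's view can be cross-checked with crictl on the node; a hypothetical session, not taken from this log:

    # No sandbox should be listed for these pods until CNI is up
    crictl pods --name network-check-target
    crictl pods --name networking-console-plugin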
Nov 21 13:33:32 crc kubenswrapper[4904]: E1121 13:33:32.512910 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:33:32 crc kubenswrapper[4904]: E1121 13:33:32.512949 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[node status cycle repeated at 13:33:32.607, 13:33:32.710, 13:33:32.812, 13:33:32.915, 13:33:33.017, 13:33:33.120, 13:33:33.223, 13:33:33.326 and 13:33:33.429; condition unchanged]
Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.512221 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c"
Nov 21 13:33:33 crc kubenswrapper[4904]: E1121 13:33:33.512376 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f"
[node status cycle repeated at 13:33:33.531; condition unchanged]
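At this density the journal is mostly the repeating heartbeat; filtering the two recurring messages out makes the actual transitions (volume errors, sandbox attempts, the eventual flip to NetworkReady=true) easy to spot. A hypothetical filter, assuming journalctl access on the node:

    journalctl -u kubelet -b | grep -vE 'Recording event message|Node became not ready'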
Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.634355 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.634396 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.634420 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.634443 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.634458 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.738207 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.738849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.739191 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.739349 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.739565 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.843094 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.843147 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.843163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.843186 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.843202 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.946697 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.946964 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.947171 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.947286 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:33 crc kubenswrapper[4904]: I1121 13:33:33.947391 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:33Z","lastTransitionTime":"2025-11-21T13:33:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.049785 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.049849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.049873 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.049905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.049926 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.153475 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.153522 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.153533 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.153551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.153562 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.256383 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.257190 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.257327 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.257442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.257620 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.361838 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.361876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.361887 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.361904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.361916 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.464256 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.464302 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.464312 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.464334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.464346 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.512636 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.512764 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.512941 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.513094 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.513276 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.513540 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.567472 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.567522 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.567537 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.567555 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.567572 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.581917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.581960 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.581971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.581987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.581998 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.594485 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.598104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.598133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.598142 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.598156 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.598165 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.611342 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.614376 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.614400 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
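The repeated patch failures give the root cause: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-21. A hedged verification sketch, assuming the openssl CLI is installed on the node:

    import subprocess

    ENDPOINT = "127.0.0.1:9743"  # webhook address from the error text above

    # grab the serving certificate; closing stdin makes s_client exit after the handshake
    hello = subprocess.run(["openssl", "s_client", "-connect", ENDPOINT],
                           input="", capture_output=True, text=True, timeout=15)
    if "BEGIN CERTIFICATE" in hello.stdout:
        dates = subprocess.run(["openssl", "x509", "-noout", "-startdate", "-enddate"],
                               input=hello.stdout, capture_output=True, text=True)
        print(dates.stdout)  # expect notAfter=Aug 24 17:21:41 2025 GMT, per the log
    else:
        print("could not fetch certificate:", hello.stderr.strip())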
event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.614409 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.614422 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.614431 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.625775 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.628981 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.629033 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
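The failed attempts so far (13:33:34.594485, .611342, .625775, and more below) land roughly 15 ms apart, consistent with the kubelet running through its small per-cycle node-status retry budget almost immediately. A small sketch for pulling that cadence out of a saved copy of this excerpt; the filename kubelet.log is hypothetical:

    import re
    from datetime import datetime

    stamps = []
    with open("kubelet.log") as f:  # hypothetical: this journal excerpt saved to disk
        for line in f:
            m = re.search(r'E1121 (\d{2}:\d{2}:\d{2}\.\d{6}).*?Error updating node status', line)
            if m:
                stamps.append(datetime.strptime(m.group(1), "%H:%M:%S.%f"))

    print(len(stamps), "failed node-status patch attempts")
    for a, b in zip(stamps, stamps[1:]):
        print(f"  next retry after {(b - a).total_seconds() * 1000:.1f} ms")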
event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.629045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.629062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.629072 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.640849 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.644136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.644164 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.644173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.644187 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.644195 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.654444 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:34Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:34 crc kubenswrapper[4904]: E1121 13:33:34.654558 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.670881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.670922 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.670930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.670946 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.670957 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.773673 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.773716 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.773730 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.773746 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.773756 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.876787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.876846 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.876855 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.876869 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.876879 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.979434 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.979474 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.979488 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.979505 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:34 crc kubenswrapper[4904]: I1121 13:33:34.979516 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:34Z","lastTransitionTime":"2025-11-21T13:33:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.082365 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.082833 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.082950 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.082983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.082999 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.185938 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.185993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.186005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.186021 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.186031 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.289003 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.289066 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.289078 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.289096 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.289105 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.392451 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.392538 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.392553 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.392583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.392600 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.494896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.494965 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.494977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.494992 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.495002 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.512884 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:35 crc kubenswrapper[4904]: E1121 13:33:35.513099 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.598052 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.598091 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.598099 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.598112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.598121 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.701109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.701167 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.701178 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.701195 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.701207 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.803853 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.803906 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.803920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.803938 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.803952 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.906803 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.907136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.907146 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.907163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:35 crc kubenswrapper[4904]: I1121 13:33:35.907173 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:35Z","lastTransitionTime":"2025-11-21T13:33:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.009639 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.009691 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.009702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.009716 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.009726 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.112020 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.112089 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.112098 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.112112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.112122 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.213901 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.213941 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.213951 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.213963 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.213971 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.316339 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.316396 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.316407 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.316438 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.316451 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.418584 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.418714 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.418726 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.418740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.418749 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.513148 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.513201 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:36 crc kubenswrapper[4904]: E1121 13:33:36.513408 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.513091 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:36 crc kubenswrapper[4904]: E1121 13:33:36.513733 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:36 crc kubenswrapper[4904]: E1121 13:33:36.513804 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.521013 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.521040 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.521048 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.521061 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.521075 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.525996 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.537400 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.556185 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab8
6514be8c4d81dafec558d415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:26Z\\\",\\\"message\\\":\\\" 13:33:26.478590 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478594 6895 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1121 13:33:26.478598 6895 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1121 13:33:26.478606 6895 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478613 6895 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nI1121 13:33:26.478617 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nF1121 13:33:26.478616 6895 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.567204 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.578940 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.590546 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.606752 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.616518 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.623455 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.623569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 
13:33:36.623587 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.623646 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.623725 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.627558 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35
12393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.643365 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.659592 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.680379 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.690079 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 
13:33:36.700290 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.713149 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.723394 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.726502 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.726533 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.726542 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.726554 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.726593 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.734716 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:36Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.828975 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.829008 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.829016 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.829029 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.829038 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.930766 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.930836 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.930860 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.930890 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:36 crc kubenswrapper[4904]: I1121 13:33:36.930914 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:36Z","lastTransitionTime":"2025-11-21T13:33:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.032913 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.032955 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.032966 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.032983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.032994 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.135338 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.135388 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.135399 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.135418 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.135430 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.238715 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.238780 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.238799 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.238824 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.238841 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.341811 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.341856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.341865 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.341881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.341890 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.444972 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.445043 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.445052 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.445087 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.445099 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.511989 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:37 crc kubenswrapper[4904]: E1121 13:33:37.512115 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.548058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.548102 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.548114 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.548131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.548144 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.650512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.650556 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.650569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.650583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.650594 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.753395 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.753423 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.753437 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.753458 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.753470 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.855738 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.855801 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.855809 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.855839 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.855850 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.957922 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.958003 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.958014 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.958027 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:37 crc kubenswrapper[4904]: I1121 13:33:37.958036 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:37Z","lastTransitionTime":"2025-11-21T13:33:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.060160 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.060190 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.060201 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.060215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.060227 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.162222 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.162271 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.162320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.162337 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.162348 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.265571 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.265629 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.265644 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.265700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.265715 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.368141 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.368207 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.368224 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.368248 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.368265 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.471385 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.471446 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.471463 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.471485 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.471502 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.512592 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.513546 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.513592 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:38 crc kubenswrapper[4904]: E1121 13:33:38.513916 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:38 crc kubenswrapper[4904]: E1121 13:33:38.515296 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:38 crc kubenswrapper[4904]: E1121 13:33:38.515565 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.530039 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.574771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.574820 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.574835 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.574856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.574874 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.678128 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.678194 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.678214 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.678242 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.678261 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.781564 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.781606 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.781614 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.781628 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.781638 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.884005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.884062 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.884083 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.884111 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.884131 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.986743 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.986958 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.987050 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.987111 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:38 crc kubenswrapper[4904]: I1121 13:33:38.987187 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:38Z","lastTransitionTime":"2025-11-21T13:33:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.089589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.089671 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.089684 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.089700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.089712 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.192257 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.192292 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.192303 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.192361 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.192394 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.294748 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.294816 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.294835 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.294861 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.294879 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.397927 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.397966 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.397975 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.397992 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.398001 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.500920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.500951 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.500961 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.500974 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.500983 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.512402 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:39 crc kubenswrapper[4904]: E1121 13:33:39.512538 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.603589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.603917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.604017 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.604103 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.604189 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.707046 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.707349 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.707480 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.707603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.707764 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.810295 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.810369 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.810384 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.810402 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.810414 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.912569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.912602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.912611 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.912624 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:39 crc kubenswrapper[4904]: I1121 13:33:39.912633 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:39Z","lastTransitionTime":"2025-11-21T13:33:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.014966 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.015006 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.015016 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.015029 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.015038 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.117375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.117686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.117772 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.117868 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.117951 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.220814 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.220919 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.220931 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.220946 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.220956 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.322541 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.322577 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.322585 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.322599 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.322608 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.425692 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.425737 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.425804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.425822 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.425843 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.512032 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:40 crc kubenswrapper[4904]: E1121 13:33:40.512156 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.512304 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:40 crc kubenswrapper[4904]: E1121 13:33:40.512351 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.512926 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:40 crc kubenswrapper[4904]: E1121 13:33:40.513103 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.513434 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:33:40 crc kubenswrapper[4904]: E1121 13:33:40.513728 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.528079 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.528110 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.528120 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.528133 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.528143 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.631801 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.631881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.631907 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.631938 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.631960 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.734856 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.734897 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.734912 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.734928 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.734942 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.836881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.836930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.836945 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.836965 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.836979 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.942422 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.942503 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.942530 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.942560 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:40 crc kubenswrapper[4904]: I1121 13:33:40.942691 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:40Z","lastTransitionTime":"2025-11-21T13:33:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.045263 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.045304 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.045312 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.045326 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.045337 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.148562 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.148612 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.148628 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.148647 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.148686 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.252505 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.252545 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.252556 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.252571 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.252582 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.356079 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.356126 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.356137 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.356153 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.356169 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.458904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.458941 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.458952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.458968 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.458979 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.512866 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:41 crc kubenswrapper[4904]: E1121 13:33:41.513000 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.561159 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.561234 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.561248 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.561270 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.561332 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.664061 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.664124 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.664138 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.664160 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.664175 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.766718 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.766744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.766752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.766765 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.766773 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.869275 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.869321 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.869329 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.869347 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.869356 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.971851 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.971898 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.971908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.971925 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:41 crc kubenswrapper[4904]: I1121 13:33:41.971937 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:41Z","lastTransitionTime":"2025-11-21T13:33:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.074771 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.074816 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.074827 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.074842 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.074853 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.177076 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.177118 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.177130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.177145 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.177157 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.279744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.279825 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.279837 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.279852 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.279866 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.381994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.382036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.382044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.382060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.382070 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.484685 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.484727 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.484739 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.484755 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.484766 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.512478 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.512552 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.512574 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:42 crc kubenswrapper[4904]: E1121 13:33:42.512619 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:42 crc kubenswrapper[4904]: E1121 13:33:42.512697 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:42 crc kubenswrapper[4904]: E1121 13:33:42.512821 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.586935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.586981 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.586993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.587014 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.587026 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.689503 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.689537 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.689546 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.689562 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.689572 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.792063 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.792116 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.792135 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.792152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.792162 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.894890 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.894935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.894967 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.894983 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.894998 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.997946 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.997993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.998011 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.998044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:42 crc kubenswrapper[4904]: I1121 13:33:42.998060 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:42Z","lastTransitionTime":"2025-11-21T13:33:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.100142 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.100181 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.100193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.100210 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.100220 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.202720 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.202767 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.202780 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.202800 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.202813 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.304740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.304786 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.304796 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.304811 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.304820 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.407690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.407741 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.407755 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.407772 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.407787 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.511197 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.511262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.511280 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.511306 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.511325 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.512451 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:43 crc kubenswrapper[4904]: E1121 13:33:43.512566 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.613849 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.613897 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.613910 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.613927 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.613938 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.716381 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.716763 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.716859 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.716933 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.716991 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.820703 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.821048 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.821232 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.821399 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.821534 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.940957 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.941209 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.941218 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.941231 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:43 crc kubenswrapper[4904]: I1121 13:33:43.941239 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:43Z","lastTransitionTime":"2025-11-21T13:33:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.044552 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.044628 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.044647 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.044700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.044718 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.148323 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.148756 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.149028 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.149271 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.149557 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.252874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.252912 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.252921 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.252934 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.252943 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.355045 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.355092 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.355108 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.355128 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.355144 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.458597 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.459098 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.459260 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.459430 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.459582 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.512058 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.512258 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.512390 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.512763 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.512945 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.513343 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.526533 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.562370 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.562420 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.562431 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.562481 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.562501 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.664613 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.664912 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.665005 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.665105 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.665198 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.691321 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.691574 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.691837 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.691944 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.692042 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.704034 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.709622 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.709682 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.709694 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.709713 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.709732 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.724847 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.728954 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.729135 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.729257 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.729356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.729447 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.741788 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.745294 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.745329 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.745341 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.745357 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.745369 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.759827 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.763137 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.763173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.763182 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.763194 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.763203 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.773594 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:44Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:44 crc kubenswrapper[4904]: E1121 13:33:44.773858 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.775361 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.775391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.775403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.775419 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.775430 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.877435 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.877496 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.877515 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.877539 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.877560 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.979843 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.979889 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.979901 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.979919 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:44 crc kubenswrapper[4904]: I1121 13:33:44.979931 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:44Z","lastTransitionTime":"2025-11-21T13:33:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.082378 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.082617 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.082703 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.082815 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.083035 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.186576 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.186909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.186994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.187084 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.187217 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.290780 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.290825 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.290836 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.290852 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.290865 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.392746 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.392781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.392790 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.392804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.392815 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.496487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.496534 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.496545 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.496562 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.496574 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.512941 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:45 crc kubenswrapper[4904]: E1121 13:33:45.513447 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.599093 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.599135 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.599144 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.599162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.599171 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.701526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.701566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.701583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.701603 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.701615 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.803889 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.803932 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.803942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.803958 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.803970 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.906843 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.906905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.906928 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.906957 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:45 crc kubenswrapper[4904]: I1121 13:33:45.906978 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:45Z","lastTransitionTime":"2025-11-21T13:33:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.008888 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.008961 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.008977 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.008993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.009027 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.112247 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.112282 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.112292 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.112308 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.112320 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.214838 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.214881 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.214889 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.214903 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.214911 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.317351 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.317390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.317400 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.317416 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.317425 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.420161 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.420193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.420203 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.420245 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.420254 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.512425 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:46 crc kubenswrapper[4904]: E1121 13:33:46.512543 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.512753 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.512440 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:46 crc kubenswrapper[4904]: E1121 13:33:46.512842 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:46 crc kubenswrapper[4904]: E1121 13:33:46.513011 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.522212 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.522270 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.522282 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.522297 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.522308 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.527764 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.537762 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.549087 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.557874 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c88e12f-3fcc-41e3-aeee-b93a07e7386d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://246881532f6f0540cde9ad7a3f020d0fd916146c008899e3b389c8a1aa34dbe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2025cc01f592f48daaa464e03d0255c574290883d982e98de7ccb0a80d84e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2025cc01f592f48daaa464e03d0255c574290883d982e98de7ccb0a80d84e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.567647 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.577844 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.601583 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:26Z\\\",\\\"message\\\":\\\" 13:33:26.478590 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478594 6895 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1121 13:33:26.478598 6895 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1121 13:33:26.478606 6895 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478613 6895 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nI1121 13:33:26.478617 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nF1121 13:33:26.478616 6895 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.614176 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.624540 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.624701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.624739 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.624756 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.624774 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.624786 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.640474 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.652069 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.660805 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.669702 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 
13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.687883 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e884ad-4458-439a-933b-4733a84cac65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ec140d99cadab514f017998d99613211866cf4549be3bd65b9a750e2231028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e917305d968379a7aef26d66a97827ab5d18d72ffaa322cd5c7775ccc43514f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f541cf5f16c370199597c526b0ff3917b07fc8bf93165506f100058760b62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a9380aa33af3709989cedab0768db9be9ba4489ee5ee96fa485540252bb201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14b4d4a83bf404d88bbe9c05c1c7aaa784e51e4398af646eb88cbeab433ea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbed200e0db2350c399be7d45cdbc50afafe154fbe9b14bb29affaa78a7719c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbed200e0db2350c399be7d45cdbc50afafe154fbe9b14bb29affaa78a7719c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc6e6214811f3940f91cbbd75dfa881e1cce41f4a1bdf876aceca842cde4994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fc6e6214811f3940f91cbbd75dfa881e1cce41f4a1bdf876aceca842cde4994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ba9ab8b7f02cd0e07d6f3e33b132de3b6318dc1a33d31ebab969b6f0c59d89ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9ab8b7f02cd0e07d6f3e33b132de3b6318dc1a33d31ebab969b6f0c59d89ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.698294 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.709037 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.718104 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.726910 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.726940 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.726952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.726967 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.726978 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.729168 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.737037 4904 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:46Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.829405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.829440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.829452 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.829467 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.829479 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.931824 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.931858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.931869 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.931883 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:46 crc kubenswrapper[4904]: I1121 13:33:46.931893 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:46Z","lastTransitionTime":"2025-11-21T13:33:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.034724 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.034768 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.034780 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.034804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.034816 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.137039 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.137087 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.137102 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.137121 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.137135 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.239845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.239887 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.239896 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.239910 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.239920 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.342122 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.342163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.342173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.342188 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.342199 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.444364 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.444403 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.444412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.444426 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.444435 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.512767 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:47 crc kubenswrapper[4904]: E1121 13:33:47.512959 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.546634 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.546701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.546712 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.546726 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.546739 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.648873 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.648934 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.648942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.648957 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.648967 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.750997 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.751212 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.751227 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.751243 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.751255 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.853383 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.853420 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.853432 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.853473 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.853483 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.955433 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.955470 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.955478 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.955490 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:47 crc kubenswrapper[4904]: I1121 13:33:47.955499 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:47Z","lastTransitionTime":"2025-11-21T13:33:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.057898 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.057952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.057961 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.057973 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.057982 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.160051 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.160101 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.160112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.160130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.160144 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.262538 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.262575 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.262582 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.262595 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.262605 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.364959 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.365007 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.365019 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.365036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.365048 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.467311 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.467346 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.467354 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.467368 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.467386 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.512804 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.512826 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.512863 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:48 crc kubenswrapper[4904]: E1121 13:33:48.512918 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:48 crc kubenswrapper[4904]: E1121 13:33:48.512991 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:48 crc kubenswrapper[4904]: E1121 13:33:48.513049 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.569992 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.570037 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.570054 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.570074 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.570091 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.672877 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.672930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.672943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.672961 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.672973 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.776502 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.776573 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.776586 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.776602 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.776614 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.879425 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.879469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.879479 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.879491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.879500 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.981711 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.981773 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.981787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.981806 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:48 crc kubenswrapper[4904]: I1121 13:33:48.981835 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:48Z","lastTransitionTime":"2025-11-21T13:33:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.085107 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.085142 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.085150 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.085164 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.085173 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.187462 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.187730 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.187836 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.187909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.187977 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.290217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.290251 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.290259 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.290277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.290288 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.392618 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.392690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.392702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.392766 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.392777 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.495812 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.495874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.495890 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.495916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.495933 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.512609 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:49 crc kubenswrapper[4904]: E1121 13:33:49.512996 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.599267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.599301 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.599310 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.599324 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.599333 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.702811 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.702858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.702867 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.702882 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.702891 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.805534 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.805599 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.805616 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.805640 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.805685 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.913038 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.913085 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.913216 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.913236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:49 crc kubenswrapper[4904]: I1121 13:33:49.913273 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:49Z","lastTransitionTime":"2025-11-21T13:33:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.018167 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.018251 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.018267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.018285 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.018297 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.121011 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.121051 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.121060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.121078 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.121087 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.224477 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.224550 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.224575 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.224609 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.224632 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.327442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.327507 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.327525 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.327544 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.327557 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.429560 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.429614 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.429626 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.429643 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.429677 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.512352 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.512669 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.512636 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:50 crc kubenswrapper[4904]: E1121 13:33:50.512928 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:50 crc kubenswrapper[4904]: E1121 13:33:50.513156 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:50 crc kubenswrapper[4904]: E1121 13:33:50.513234 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.533336 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.533418 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.533431 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.533449 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.533464 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.636828 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.636921 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.636953 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.636993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.637020 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.740146 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.740216 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.740236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.740259 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.740277 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.843384 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.843474 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.843494 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.843526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.843551 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.947700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.947779 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.947790 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.947810 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:50 crc kubenswrapper[4904]: I1121 13:33:50.947821 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:50Z","lastTransitionTime":"2025-11-21T13:33:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.050358 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.050429 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.050440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.050458 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.050470 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.153006 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.153077 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.153090 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.153130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.153144 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.256185 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.256265 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.256279 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.256326 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.256340 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.359644 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.359740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.359759 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.359784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.359803 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.462246 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.462280 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.462288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.462322 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.462331 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.512908 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:51 crc kubenswrapper[4904]: E1121 13:33:51.513169 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.531285 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:51 crc kubenswrapper[4904]: E1121 13:33:51.531489 4904 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:51 crc kubenswrapper[4904]: E1121 13:33:51.531609 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs podName:7c038482-babe-44bf-a8ff-89415347e81f nodeName:}" failed. No retries permitted until 2025-11-21 13:34:55.53158136 +0000 UTC m=+169.653113912 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs") pod "network-metrics-daemon-mx57c" (UID: "7c038482-babe-44bf-a8ff-89415347e81f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.565930 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.565987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.565999 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.566019 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.566032 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.668458 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.668519 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.668530 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.668550 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.668564 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.771284 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.771333 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.771347 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.771363 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.771375 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.873506 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.873558 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.873571 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.873589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.873600 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.975957 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.976013 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.976022 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.976034 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:51 crc kubenswrapper[4904]: I1121 13:33:51.976042 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:51Z","lastTransitionTime":"2025-11-21T13:33:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.078128 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.078166 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.078174 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.078188 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.078199 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.180597 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.180649 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.180685 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.180700 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.180710 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.283903 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.283987 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.284011 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.284044 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.284065 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.386469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.386505 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.386515 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.386532 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.386544 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.488784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.488858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.488870 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.488909 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.488925 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.512488 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.512644 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.512768 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:52 crc kubenswrapper[4904]: E1121 13:33:52.512870 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:52 crc kubenswrapper[4904]: E1121 13:33:52.512960 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:52 crc kubenswrapper[4904]: E1121 13:33:52.513023 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.591463 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.591493 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.591501 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.591512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.591521 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
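
All of the "No sandbox for pod can be found" and "Error syncing pod, skipping" entries above trace back to one root condition: the container runtime reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/. On this node the network provider is OVN-Kubernetes, whose ovnkube-node pod is itself failing to start (see the CrashLoopBackOff entry at 13:33:54 below), so the file never appears. A quick way to watch for the condition clearing is to look for config files in that directory, much as CRI runtimes do; the extension list follows the usual libcni convention and is an assumption, not taken from this node's runtime configuration:

```python
import glob
import os

# Directory the kubelet complains about in the log above.
CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"

def cni_configs(directory=CNI_CONF_DIR):
    """Return CNI config files (.conf/.conflist/.json) found in `directory`."""
    found = []
    for pattern in ("*.conf", "*.conflist", "*.json"):
        found.extend(glob.glob(os.path.join(directory, pattern)))
    return sorted(found)

if __name__ == "__main__":
    configs = cni_configs()
    if configs:
        print("CNI config present; NetworkReady should recover:")
        for path in configs:
            print(" ", path)
    else:
        print(f"no CNI configuration file in {CNI_CONF_DIR} (matches the log)")
```
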
Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.694280 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.694320 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.694335 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.694353 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.694363 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.801312 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.801356 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.801367 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.801380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.801389 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.904629 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.904676 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.904686 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.904698 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:52 crc kubenswrapper[4904]: I1121 13:33:52.904706 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:52Z","lastTransitionTime":"2025-11-21T13:33:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.007314 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.007362 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.007371 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.007385 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.007395 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.110028 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.110067 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.110075 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.110088 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.110097 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.212684 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.212969 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.213100 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.213185 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.213269 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.316192 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.316230 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.316240 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.316253 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.316263 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.418127 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.418217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.418230 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.418248 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.418261 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.512509 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:53 crc kubenswrapper[4904]: E1121 13:33:53.512868 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.520425 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.520461 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.520471 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.520486 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.520497 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.622998 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.623060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.623078 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.623103 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.623122 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.725163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.725219 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.725238 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.725261 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.725279 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.827792 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.827858 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.827876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.827900 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.827917 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.930636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.930712 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.930733 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.930756 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:53 crc kubenswrapper[4904]: I1121 13:33:53.930811 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:53Z","lastTransitionTime":"2025-11-21T13:33:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.033215 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.033353 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.033380 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.033411 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.033433 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.136065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.136131 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.136142 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.136159 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.136171 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.238642 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.238733 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.238744 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.238761 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.238773 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.341509 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.341575 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.341593 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.341615 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.341634 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.445009 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.445058 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.445067 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.445084 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.445095 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.512917 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.513068 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:33:54 crc kubenswrapper[4904]: E1121 13:33:54.513215 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.513243 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:33:54 crc kubenswrapper[4904]: E1121 13:33:54.513885 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:33:54 crc kubenswrapper[4904]: E1121 13:33:54.514157 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.514313 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:33:54 crc kubenswrapper[4904]: E1121 13:33:54.514517 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.547949 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.547994 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.548003 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.548020 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.548032 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
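
The ovnkube-controller container above is in CrashLoopBackOff, which is why the CNI configuration never materializes. The "back-off 40s" figure fits the commonly documented kubelet progression of a 10 s base delay doubling per restart up to a 5 min cap, under which 40 s corresponds to the third restart attempt; these constants are assumed defaults, not values read from this node:

```python
# Sketch of the kubelet-style CrashLoopBackOff progression implied by
# "back-off 40s" above. BASE and CAP are the commonly documented
# defaults, assumed here rather than read from this node.
BASE = 10        # seconds
CAP = 5 * 60     # seconds

def crashloop_delay(restarts):
    """Return the wait (in seconds) before restart attempt `restarts`."""
    return min(BASE * (2 ** restarts), CAP)

if __name__ == "__main__":
    for r in range(6):
        print(f"restart {r}: wait {crashloop_delay(r)}s")
```
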
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.651495 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.651566 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.651583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.651605 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.651620 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.754272 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.754357 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.754369 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.754386 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.754397 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.857589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.857695 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.857709 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.857752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.857764 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.960580 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.960623 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.960636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.960674 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:54 crc kubenswrapper[4904]: I1121 13:33:54.960686 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:54Z","lastTransitionTime":"2025-11-21T13:33:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.050601 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.050636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.050644 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.050676 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.050687 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
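
The entry that follows shows the kubelet's periodic node-status update failing. The payload it tries to PATCH is a strategic merge patch: a "$setElementOrder/conditions" directive pinning the order of status.conditions, plus the changed condition objects themselves. An illustrative reconstruction of that payload's shape (field names are taken from the log; the helper function itself is hypothetical):

```python
import json
from datetime import datetime, timezone

def ready_condition(now, message):
    """Build the Ready condition entry as it appears in the logged patch."""
    return {
        "type": "Ready",
        "status": "False",
        "reason": "KubeletNotReady",
        "message": message,
        "lastHeartbeatTime": now,
        "lastTransitionTime": now,
    }

if __name__ == "__main__":
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    patch = {
        "status": {
            # Strategic-merge directive fixing the order of the list.
            "$setElementOrder/conditions": [
                {"type": "MemoryPressure"},
                {"type": "DiskPressure"},
                {"type": "PIDPressure"},
                {"type": "Ready"},
            ],
            # Only changed conditions need to be listed in full.
            "conditions": [
                ready_condition(now, "container runtime network not ready"),
            ],
        }
    }
    print(json.dumps(patch, indent=2))
```
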
Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4904]: E1121 13:33:55.061879 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.064904 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.064934 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.064943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.064956 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.064966 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4904]: E1121 13:33:55.075162 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.078183 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.078211 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
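The node-status patch in these entries is logged as a twice-escaped JSON string, which is unreadable in place. Below is a minimal Python sketch, not part of any kubelet tooling, for extracting and pretty-printing the patch from a journal line that still carries the full payload (like the first entry above); the helper name and marker strings are illustrative:

import json

def decode_status_patch(journal_line: str) -> dict:
    # The patch sits between these markers, escaped twice: once by the
    # kubelet quoting err="...", once as a JSON string literal.
    start = journal_line.index('failed to patch status ') + len('failed to patch status ')
    end = journal_line.index(' for node ', start)
    quoted = journal_line[start:end]                 # \"{\\\"status\\\":...}\"
    once = quoted.encode().decode('unicode_escape')  # "{\"status\":...}" -- a JSON string literal
    return json.loads(json.loads(once))              # unquote, then parse the patch itself

# Example: print(json.dumps(decode_status_patch(line), indent=2)) exposes the
# $setElementOrder/conditions, allocatable/capacity and images fields seen above.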
event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.078220 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.078234 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.078243 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4904]: E1121 13:33:55.087484 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.091396 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.091420 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
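Every attempt in this burst fails identically: the apiserver cannot call the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because its serving certificate expired at 2025-08-24T17:21:41Z, months before the current time in the log. A minimal sketch for confirming the certificate's validity window from the node follows; it assumes the third-party cryptography package is installed, since the standard library alone cannot parse the DER certificate:

import socket, ssl
from cryptography import x509

HOST, PORT = "127.0.0.1", 9743   # taken from the Post URL in the error above

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # an expired cert fails normal verification, so skip it

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER; the parsed dict is empty under CERT_NONE

cert = x509.load_der_x509_certificate(der)
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)        # expect 2025-08-24 17:21:41 per the log

Five consecutive "will retry" errors before the kubelet gives up (below) is consistent with the upstream kubelet's nodeStatusUpdateRetry limit of 5.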
event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.091428 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.091440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.091449 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4904]: E1121 13:33:55.101812 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.105886 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.105928 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
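Separately from the webhook failure, the Ready condition stays False because the container runtime reports no CNI configuration in /etc/kubernetes/cni/net.d/. A quick sketch of that check follows; the .conf/.conflist/.json filter mirrors, as an assumption here, libcni's config discovery rule:

import os

CNI_DIR = "/etc/kubernetes/cni/net.d"  # path taken from the log message

def cni_configs(path: str = CNI_DIR) -> list:
    # Returns the candidate CNI config files a runtime would consider.
    try:
        return sorted(f for f in os.listdir(path)
                      if f.endswith((".conf", ".conflist", ".json")))
    except FileNotFoundError:
        return []

print(cni_configs() or "no CNI configuration files found")

An empty listing matches the NetworkPluginNotReady message repeated throughout this window.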
event="NodeHasNoDiskPressure" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.105939 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.105956 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.105967 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:55 crc kubenswrapper[4904]: E1121 13:33:55.117698 4904 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a779a9ca-4efd-4ba5-b2c5-671da2b6633b\\\",\\\"systemUUID\\\":\\\"e1db4033-eba5-4a2b-9bc8-5ae38770be76\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:55Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:55 crc kubenswrapper[4904]: E1121 13:33:55.117862 4904 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.119646 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.119690 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.119721 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.119742 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.119761 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.221711 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.221762 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.221773 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.221787 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.221797 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.324646 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.324724 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.324738 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.324755 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.324769 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.427236 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.427303 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.427317 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.427342 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.427359 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.512375 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c"
Nov 21 13:33:55 crc kubenswrapper[4904]: E1121 13:33:55.512600 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.530874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.530932 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.530969 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.530995 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.531008 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.633258 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.633307 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.633322 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.633339 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.633378 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.736344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.736410 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.736434 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.736454 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.736467 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.839195 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.839258 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.839268 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.839288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.839300 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.942009 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.942054 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.942068 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.942087 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:55 crc kubenswrapper[4904]: I1121 13:33:55.942099 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:55Z","lastTransitionTime":"2025-11-21T13:33:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.044848 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.044903 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.044918 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.044937 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.044950 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.148508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.148558 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.148571 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.148591 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.148608 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.250832 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.250866 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.250874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.250887 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.250896 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.353452 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.353503 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.353517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.353533 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.353542 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.455747 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.455784 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.455794 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.455810 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.455821 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.512804 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.512867 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.512875 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:56 crc kubenswrapper[4904]: E1121 13:33:56.512934 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:33:56 crc kubenswrapper[4904]: E1121 13:33:56.513147 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:33:56 crc kubenswrapper[4904]: E1121 13:33:56.513167 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.523958 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c88e12f-3fcc-41e3-aeee-b93a07e7386d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://246881532f6f0540cde9ad7a3f020d0fd916146c008899e3b389c8a1aa34dbe0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2025cc01f592f48daaa464e03d0255c574290883d982e98de7ccb0a80d84e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2025cc01f592f48daaa464e03d0255c574290883d982e98de7ccb0a80d84e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 
21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.535686 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.550986 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://caff2edf997f221598e518710baf13eea660b4a6abdfcf9af87e0a683f255a57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2320f73d84684b98a22b18bf40e3f169d3b9338c2846b6afee2892fb30dd2ebe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.558847 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.558947 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.559017 4904 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.559051 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.559152 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.577066 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"349c3b8f-5311-4171-ade5-ce7db3d118ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab8
6514be8c4d81dafec558d415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:26Z\\\",\\\"message\\\":\\\" 13:33:26.478590 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478594 6895 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1121 13:33:26.478598 6895 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1121 13:33:26.478606 6895 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1121 13:33:26.478613 6895 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nI1121 13:33:26.478617 6895 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984\\\\nF1121 13:33:26.478616 6895 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:33:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-txkm2_openshift-ovn-kubernetes(349c3b8f-5311-4171-ade5-ce7db3d118ad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-658bx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txkm2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.591631 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2c2502-c8cc-496a-878b-ba2ac25add9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://420e75810ec14fb8731ebf5bc3923916d681a65964b62419b54f72a61cb4aea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b82bd1063823ea02e8ba47d5426862618c71434d559c70d6b3465cc2b0073aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ec824d0f23953e873661d89ae23b82f39a6f7baa760bb72f47212c0259413fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55b922026b31842ab02662ec59deec6de878f6fe1d582b286231d541c08930a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.605108 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:29Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f01f439c86cf8923b647565dcec2d2b385d75adeb69c1cda556b747d00bdd9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.620110 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84cc750845f3fd5efc92afa863fa360117394f4f6a8520fcebd723669d1d808e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.630845 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96e1548b-c40d-450b-a2f1-51e56c467178\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e2b95f646bdcc50ba8d7f9fbfdbbb75f9f69ed0785374bcb05a1e687553210\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8hwdw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xb8tn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.643619 4904 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-mx57c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7c038482-babe-44bf-a8ff-89415347e81f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z47gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mx57c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.661335 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.661375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 
13:33:56.661387 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.661404 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.661416 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.668210 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97e884ad-4458-439a-933b-4733a84cac65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5ec140d99cadab514f017998d99613211866cf4549be3bd65b9a750e2231028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e917305d968379a7aef26d66a97827ab5d18d72ffaa322cd5c7775ccc43514f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f541cf5f16c370199597c526b0ff3917b07fc8bf93165506f100058760b62c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a9380aa33af3709989cedab0768db9be9ba4489ee5ee96fa485540252bb201\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e14b4d4a83bf404d88bbe9c05c1c7aaa784e51e4398af646eb88cbeab433ea3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbed200e0db2350c399be7d45cdbc50afafe154fbe9b14bb29affaa78a7719c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbed200e0db2350c399be7d45cdbc50afafe154fbe9b14bb29affaa78a7719c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fc6e6214811f3940f91cbbd75dfa881e1cce41f4a1bdf876aceca842cde4994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fc6e6214811f3940f91cbbd75dfa881e1cce41f4a1bdf876aceca842cde4994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ba9ab8b7f02cd0e07d6f3e33b132de3b6318dc1a33d31ebab969b6f0c59d89ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba9ab8b7f02cd0e07d6f3e33b132de3b6318dc1a33d31ebab969b6f0c59d89ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.681063 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fdc5a43-3718-4a71-9e3d-7196bc88fef6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7695d325c2d85a2ed1e97b005d8b62c63a4eb279b2052b96f33ee52080aede\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2df0e2f0dc7fb109db3285fc06ebd26371c0e7549ec520d050a906e46a937da0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3512393b60a42f6467e740fc5eaebef9b0842a8311354151b6bc0d42c60136f9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0109c95ce93684028bbee2c3fbdbeaeb4747e6245e55be9dd8f2320626620806\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.694450 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.705032 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.723185 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"90d7f33d-b498-4549-8c92-9b614313b06f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7fc10bd6605830161f4341941339d01895d808224f293b9dc59a74001d0fcf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e2cc27442f72fc355340d6f4c12f7f5796f015c456f6294ae1682fd6e05b889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ee90ca1d9f9d56fd30ea66215a421e123b15a9370b7338f554861c9921d6529\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20ea2e8f884ce510d867b9def6b337dfa21ec2d60be17f731e9793e6b1289bb2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66daae08ff2fb163e6e14633211733303d874a4d37a0f4b94a2088b38faf366b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06a12d4a7f6ff6fc5d1646f8ca4bcf711b67981e9d8713da9ad5a667c6a232a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3fc5808d4912df613f6538e37f138d54719c39f6f2d926dbfc2d536f6e49659\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsqn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xgf6p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.733908 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bf22h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1d9e1501-945f-4229-baa9-b86acd98cb04\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6aff78d8481ee9e9639afc4096cd2477b80b2813189b52b9f0275f31f8b44971\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2sg2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bf22h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 
13:33:56.746392 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6b684a-caa5-43e4-aba0-e64fd2090b7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aed9d698d1a412c4ef49ac02008dfb6efdd9abce2510b243bf4d201704d26f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7625d102932616c9e8d06ac135caf7533d6f3712c5706b8259eee6445f5aa6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89k5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:46Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pr984\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.760629 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48338e32-b69d-4fea-8f49-84f673b80640\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8e42b927d9cee392f915c02f2e655e5c1ef914d2ee48c375ebe22b6b1119a5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0afcde5165f327667f87d2b51b498030c79885f537d83cd07261053b2fbe9d7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b16932368dee6bffce4e4fec055def04241bd467ff5727d8715d9df9f780b79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:09Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ea4c774efa4703d2203e19b1ad979694107ebcaf67164a7c41009fbb4cd5d4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://928167e131da45318d2bb8ad0dd3b335217ab875e95362b53da7ffb8f8c21f11\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-21T13:32:26Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1121 13:32:10.970691 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1121 13:32:10.975406 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1356871736/tls.crt::/tmp/serving-cert-1356871736/tls.key\\\\\\\"\\\\nI1121 13:32:26.350546 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1121 13:32:26.358457 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1121 13:32:26.358511 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1121 13:32:26.358548 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1121 13:32:26.358559 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1121 13:32:26.367326 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1121 13:32:26.367355 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367360 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1121 13:32:26.367364 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1121 13:32:26.367367 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1121 13:32:26.367370 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1121 13:32:26.367373 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1121 13:32:26.367366 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1121 13:32:26.370474 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b07f873b09a83ffae66854dfd6ab1e33d692839a1444a36c1962f2a61c9d294\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ec2bcf8eba42b066715e2fedf5c0ffe1295cefca4e73c8ce00afb233ede51b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-21T13:32:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:06Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.763794 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.763886 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.763908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.763942 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.763967 4904 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.771815 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4hpll" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"576c335c-cce7-461f-9308-546814064708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a772394a2120655bf7f4f47fac6ef1c50cd03bf1ba441895548d79f722f3b5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:32:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq9hc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4hpll\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.788414 4904 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kgngm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:32:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-21T13:33:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-21T13:33:21Z\\\",\\\"message\\\":\\\"2025-11-21T13:32:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa\\\\n2025-11-21T13:32:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_02f8e7e1-d45d-448b-bffa-ab833c0f70aa to /host/opt/cni/bin/\\\\n2025-11-21T13:32:36Z [verbose] multus-daemon started\\\\n2025-11-21T13:32:36Z [verbose] Readiness Indicator file check\\\\n2025-11-21T13:33:21Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-21T13:32:34Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-21T13:33:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8mrzc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-21T13:32:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kgngm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-21T13:33:56Z is after 2025-08-24T17:21:41Z" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.866935 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.867014 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.867040 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.867070 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.867091 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.970319 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.970409 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.970432 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.970464 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:56 crc kubenswrapper[4904]: I1121 13:33:56.970505 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:56Z","lastTransitionTime":"2025-11-21T13:33:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.074024 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.074112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.074132 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.074162 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.074184 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.177420 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.177458 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.177469 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.177487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.177500 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.279976 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.280032 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.280048 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.280070 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.280082 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.382484 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.382521 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.382533 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.382551 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.382561 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.484421 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.484466 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.484478 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.484495 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.484509 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.512932 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:57 crc kubenswrapper[4904]: E1121 13:33:57.513063 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.587175 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.587258 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.587278 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.587301 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.587318 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.690375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.690418 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.690442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.690457 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.690466 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.793331 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.793381 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.793395 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.793414 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.793428 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.896525 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.896560 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.896569 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.896583 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.896592 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.998260 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.998309 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.998321 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.998334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:57 crc kubenswrapper[4904]: I1121 13:33:57.998346 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:57Z","lastTransitionTime":"2025-11-21T13:33:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.100880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.100917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.100928 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.100943 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.100954 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.203221 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.203250 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.203259 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.203272 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.203280 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.305296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.305350 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.305359 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.305375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.305387 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.407838 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.407870 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.407880 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.407892 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.407903 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.509938 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.509997 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.510015 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.510036 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.510053 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
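Every repetition of the KubeletNotReady condition above reduces to a single probe result: the container runtime keeps answering NetworkReady=false because nothing parseable exists in /etc/kubernetes/cni/net.d/ yet. As a rough stand-in (not the actual kubelet or CRI-O code), the probe amounts to scanning that directory for a CNI network configuration; the directory comes from the log message and the extension list is assumed from libcni's usual conventions.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// confDir mirrors the path named in the repeated log message; the real
// runtime reads whatever CNI config directory it was started with.
const confDir = "/etc/kubernetes/cni/net.d"

// cniConfigPresent reports whether any plausible CNI config file exists.
func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni accepts
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent(confDir)
	if err != nil || !ok {
		// The state the kubelet keeps reporting above:
		// NetworkReady=false, reason NetworkPluginNotReady.
		fmt.Println("network not ready: no CNI configuration file found")
		return
	}
	fmt.Println("network ready")
}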
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.512575 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.512594 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.512637 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 21 13:33:58 crc kubenswrapper[4904]: E1121 13:33:58.513044 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 21 13:33:58 crc kubenswrapper[4904]: E1121 13:33:58.513187 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 21 13:33:58 crc kubenswrapper[4904]: E1121 13:33:58.513374 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.612248 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.612285 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.612297 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.612313 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.612325 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
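The three "No sandbox for pod can be found" entries immediately followed by "Error syncing pod, skipping" show how pod sync is gated on that probe: pods that need a network sandbox fail fast while NetworkReady=false, whereas host-network pods (such as multus-kgngm earlier in the log, whose podIP equals the hostIP) keep running. A compressed sketch of that decision, using hypothetical types in place of the kubelet's pod workers:

package main

import (
	"errors"
	"fmt"
)

// pod is a hypothetical stand-in for the kubelet's pod object.
type pod struct {
	name        string
	hostNetwork bool
}

var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

// syncPod mirrors the observable gating: host-network pods may proceed,
// everything else is skipped until CNI comes up, producing the
// "Error syncing pod, skipping" entries above.
func syncPod(p pod, networkReady bool) error {
	if !networkReady && !p.hostNetwork {
		return errNetworkNotReady
	}
	// ... create sandbox, start containers ...
	return nil
}

func main() {
	pods := []pod{
		{name: "openshift-multus/network-metrics-daemon-mx57c", hostNetwork: false},
		{name: "openshift-multus/multus-kgngm", hostNetwork: true},
	}
	for _, p := range pods {
		if err := syncPod(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
		}
	}
}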
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.714375 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.714406 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.714414 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.714427 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.714435 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.816457 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.816511 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.816520 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.816535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.816546 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.918813 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.918865 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.918876 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.918893 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 21 13:33:58 crc kubenswrapper[4904]: I1121 13:33:58.918907 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:58Z","lastTransitionTime":"2025-11-21T13:33:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.021702 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.022001 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.022110 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.022201 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.022279 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.124589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.124647 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.124694 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.124715 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.124736 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.227020 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.227273 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.227496 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.227595 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.227740 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.330137 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.330370 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.330465 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.330535 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.330593 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.432636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.432727 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.432740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.432757 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.432772 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.512115 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:33:59 crc kubenswrapper[4904]: E1121 13:33:59.512243 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.535630 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.535900 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.536038 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.536186 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.536314 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.638878 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.638924 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.638933 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.638944 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.638957 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.741104 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.741149 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.741158 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.741173 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.741183 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.843948 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.844237 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.844345 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.844436 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.844519 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.948166 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.948277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.948308 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.948339 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:33:59 crc kubenswrapper[4904]: I1121 13:33:59.948364 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:33:59Z","lastTransitionTime":"2025-11-21T13:33:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.055517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.055952 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.056038 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.056115 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.056177 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.158405 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.158725 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.158826 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.158922 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.159015 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.261866 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.261905 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.261915 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.261929 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.261941 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.365454 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.365795 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.365884 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.365961 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.366039 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.469296 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.469345 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.469353 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.469368 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.469377 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.512492 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.512557 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.512709 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:00 crc kubenswrapper[4904]: E1121 13:34:00.513272 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:00 crc kubenswrapper[4904]: E1121 13:34:00.513243 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:00 crc kubenswrapper[4904]: E1121 13:34:00.513381 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.572821 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.572877 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.572889 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.572914 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.572927 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.676125 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.676181 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.676193 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.676212 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.676222 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.779480 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.779530 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.779539 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.779558 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.779567 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.883736 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.883781 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.883791 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.883807 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.883818 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.986916 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.986991 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.987010 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.987041 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:00 crc kubenswrapper[4904]: I1121 13:34:00.987064 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:00Z","lastTransitionTime":"2025-11-21T13:34:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.090031 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.090112 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.090136 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.090168 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.090193 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.193376 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.193438 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.193457 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.193483 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.193501 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.297163 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.297240 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.297254 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.297277 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.297291 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.400871 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.400918 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.400928 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.400947 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.400958 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.504331 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.504390 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.504404 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.504425 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.504438 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.512968 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:01 crc kubenswrapper[4904]: E1121 13:34:01.513227 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.606801 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.606882 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.606908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.606948 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.606972 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.711027 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.711103 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.711123 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.711151 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.711168 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.814618 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.814725 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.814750 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.814783 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.814809 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.918804 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.921262 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.921520 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.921752 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:01 crc kubenswrapper[4904]: I1121 13:34:01.922559 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:01Z","lastTransitionTime":"2025-11-21T13:34:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.030706 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.030755 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.030772 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.030800 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.030817 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.133857 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.133919 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.133932 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.133950 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.133962 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.237874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.237920 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.237929 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.237944 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.237953 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.341449 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.341497 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.341508 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.341526 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.341537 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.444258 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.444308 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.444319 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.444334 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.444343 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.513870 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:02 crc kubenswrapper[4904]: E1121 13:34:02.514065 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.513942 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.513902 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:02 crc kubenswrapper[4904]: E1121 13:34:02.514191 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:02 crc kubenswrapper[4904]: E1121 13:34:02.514381 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.547762 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.547812 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.547825 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.547845 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.547858 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.650636 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.650701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.650713 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.650727 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.650738 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.753184 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.753226 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.753267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.753286 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.753295 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.855993 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.856139 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.856155 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.856172 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.856201 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.957874 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.957908 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.957917 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.957932 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:02 crc kubenswrapper[4904]: I1121 13:34:02.957941 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:02Z","lastTransitionTime":"2025-11-21T13:34:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.060585 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.060627 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.060637 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.060667 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.060678 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.162032 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.162065 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.162073 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.162084 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.162093 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.265374 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.265442 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.265457 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.265479 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.265495 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.368335 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.368412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.368424 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.368440 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.368476 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.471531 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.471588 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.471601 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.471621 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.471637 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.512289 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:03 crc kubenswrapper[4904]: E1121 13:34:03.512478 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.575072 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.575187 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.575201 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.575217 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.575227 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.678411 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.678465 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.678475 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.678492 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.678503 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.783109 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.783203 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.783230 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.783267 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.783292 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.886418 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.886475 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.886487 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.886503 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.886512 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.990146 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.990707 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.990833 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.990991 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:03 crc kubenswrapper[4904]: I1121 13:34:03.991109 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:03Z","lastTransitionTime":"2025-11-21T13:34:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.095247 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.095313 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.095326 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.095345 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.095355 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.199579 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.199630 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.199644 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.199681 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.199693 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.302517 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.302580 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.302600 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.302625 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.302648 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.405599 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.405666 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.405680 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.405701 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.405715 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.509204 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.509255 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.509268 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.509288 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.509299 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.512122 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.512176 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.512195 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:04 crc kubenswrapper[4904]: E1121 13:34:04.512332 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:04 crc kubenswrapper[4904]: E1121 13:34:04.512501 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:04 crc kubenswrapper[4904]: E1121 13:34:04.512629 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.613400 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.613491 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.613512 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.613545 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.613569 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.717624 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.717723 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.717740 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.717766 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.717786 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.820344 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.820412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.820423 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.820471 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.820483 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.924412 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.924531 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.924552 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.924589 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:04 crc kubenswrapper[4904]: I1121 13:34:04.924610 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:04Z","lastTransitionTime":"2025-11-21T13:34:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.028126 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.028218 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.028239 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.028269 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.028304 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:05Z","lastTransitionTime":"2025-11-21T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.131233 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.131340 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.131359 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.131391 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.131416 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:05Z","lastTransitionTime":"2025-11-21T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.235950 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.236130 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.236152 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.236184 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.236206 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:05Z","lastTransitionTime":"2025-11-21T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.237971 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.238041 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.238060 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.238088 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.238105 4904 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-21T13:34:05Z","lastTransitionTime":"2025-11-21T13:34:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.318988 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2"] Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.319443 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.323008 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.323607 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.326194 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.326603 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.348938 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=21.348916872 podStartE2EDuration="21.348916872s" podCreationTimestamp="2025-11-21 13:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.348748207 +0000 UTC m=+119.470280779" watchObservedRunningTime="2025-11-21 13:34:05.348916872 +0000 UTC m=+119.470449424" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.365743 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.365722262 podStartE2EDuration="1m32.365722262s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.365346042 +0000 UTC m=+119.486878604" watchObservedRunningTime="2025-11-21 13:34:05.365722262 +0000 UTC m=+119.487254854" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.377967 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f13295ee-8281-4198-996e-9c76936d91ac-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.378118 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f13295ee-8281-4198-996e-9c76936d91ac-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.378835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f13295ee-8281-4198-996e-9c76936d91ac-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: 
\"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.378931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f13295ee-8281-4198-996e-9c76936d91ac-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.378955 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f13295ee-8281-4198-996e-9c76936d91ac-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.408251 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xgf6p" podStartSLOduration=93.408235188 podStartE2EDuration="1m33.408235188s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.408189077 +0000 UTC m=+119.529721629" watchObservedRunningTime="2025-11-21 13:34:05.408235188 +0000 UTC m=+119.529767730" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.418935 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bf22h" podStartSLOduration=93.418893229 podStartE2EDuration="1m33.418893229s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.418429177 +0000 UTC m=+119.539961719" watchObservedRunningTime="2025-11-21 13:34:05.418893229 +0000 UTC m=+119.540425771" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.430150 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pr984" podStartSLOduration=92.430114802 podStartE2EDuration="1m32.430114802s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.429428645 +0000 UTC m=+119.550961197" watchObservedRunningTime="2025-11-21 13:34:05.430114802 +0000 UTC m=+119.551647354" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.446125 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=99.446106692 podStartE2EDuration="1m39.446106692s" podCreationTimestamp="2025-11-21 13:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.444998755 +0000 UTC m=+119.566531307" watchObservedRunningTime="2025-11-21 13:34:05.446106692 +0000 UTC m=+119.567639244" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.468231 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4hpll" 
podStartSLOduration=93.468215121 podStartE2EDuration="1m33.468215121s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.456405903 +0000 UTC m=+119.577938455" watchObservedRunningTime="2025-11-21 13:34:05.468215121 +0000 UTC m=+119.589747673" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.477866 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kgngm" podStartSLOduration=93.477849816 podStartE2EDuration="1m33.477849816s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.46859105 +0000 UTC m=+119.590123612" watchObservedRunningTime="2025-11-21 13:34:05.477849816 +0000 UTC m=+119.599382368" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.478237 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.478234355 podStartE2EDuration="27.478234355s" podCreationTimestamp="2025-11-21 13:33:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.477570649 +0000 UTC m=+119.599103201" watchObservedRunningTime="2025-11-21 13:34:05.478234355 +0000 UTC m=+119.599766897" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.479915 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f13295ee-8281-4198-996e-9c76936d91ac-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.480068 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f13295ee-8281-4198-996e-9c76936d91ac-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.480248 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f13295ee-8281-4198-996e-9c76936d91ac-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.481579 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f13295ee-8281-4198-996e-9c76936d91ac-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.481762 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f13295ee-8281-4198-996e-9c76936d91ac-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.480202 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f13295ee-8281-4198-996e-9c76936d91ac-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.480028 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f13295ee-8281-4198-996e-9c76936d91ac-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.482458 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f13295ee-8281-4198-996e-9c76936d91ac-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.486865 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f13295ee-8281-4198-996e-9c76936d91ac-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.499685 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f13295ee-8281-4198-996e-9c76936d91ac-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kp8p2\" (UID: \"f13295ee-8281-4198-996e-9c76936d91ac\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.512806 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:05 crc kubenswrapper[4904]: E1121 13:34:05.513340 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.552842 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=69.552821624 podStartE2EDuration="1m9.552821624s" podCreationTimestamp="2025-11-21 13:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.541495198 +0000 UTC m=+119.663027750" watchObservedRunningTime="2025-11-21 13:34:05.552821624 +0000 UTC m=+119.674354186" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.578991 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podStartSLOduration=93.578967332 podStartE2EDuration="1m33.578967332s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:05.576919831 +0000 UTC m=+119.698452373" watchObservedRunningTime="2025-11-21 13:34:05.578967332 +0000 UTC m=+119.700499884" Nov 21 13:34:05 crc kubenswrapper[4904]: I1121 13:34:05.641001 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" Nov 21 13:34:05 crc kubenswrapper[4904]: W1121 13:34:05.663494 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf13295ee_8281_4198_996e_9c76936d91ac.slice/crio-c98dbc5e07c1a0a77fdbb0590cecef5bb9f1b880305a4ac7dbb5238d41c18702 WatchSource:0}: Error finding container c98dbc5e07c1a0a77fdbb0590cecef5bb9f1b880305a4ac7dbb5238d41c18702: Status 404 returned error can't find the container with id c98dbc5e07c1a0a77fdbb0590cecef5bb9f1b880305a4ac7dbb5238d41c18702 Nov 21 13:34:06 crc kubenswrapper[4904]: I1121 13:34:06.174790 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" event={"ID":"f13295ee-8281-4198-996e-9c76936d91ac","Type":"ContainerStarted","Data":"5374c6e127cfb42985a33e840207365b45736d0699af82d775555d4a6fbdde1f"} Nov 21 13:34:06 crc kubenswrapper[4904]: I1121 13:34:06.175115 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" event={"ID":"f13295ee-8281-4198-996e-9c76936d91ac","Type":"ContainerStarted","Data":"c98dbc5e07c1a0a77fdbb0590cecef5bb9f1b880305a4ac7dbb5238d41c18702"} Nov 21 13:34:06 crc kubenswrapper[4904]: E1121 13:34:06.464678 4904 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 21 13:34:06 crc kubenswrapper[4904]: I1121 13:34:06.512903 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:06 crc kubenswrapper[4904]: I1121 13:34:06.513373 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:06 crc kubenswrapper[4904]: I1121 13:34:06.514397 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:06 crc kubenswrapper[4904]: E1121 13:34:06.514544 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:06 crc kubenswrapper[4904]: E1121 13:34:06.514771 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:06 crc kubenswrapper[4904]: I1121 13:34:06.514837 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:34:06 crc kubenswrapper[4904]: E1121 13:34:06.514876 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:06 crc kubenswrapper[4904]: E1121 13:34:06.634597 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 13:34:07 crc kubenswrapper[4904]: I1121 13:34:07.180457 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/3.log" Nov 21 13:34:07 crc kubenswrapper[4904]: I1121 13:34:07.184542 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerStarted","Data":"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66"} Nov 21 13:34:07 crc kubenswrapper[4904]: I1121 13:34:07.185258 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:34:07 crc kubenswrapper[4904]: I1121 13:34:07.229853 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podStartSLOduration=94.229836589 podStartE2EDuration="1m34.229836589s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:07.228095787 +0000 UTC m=+121.349628349" watchObservedRunningTime="2025-11-21 13:34:07.229836589 +0000 UTC m=+121.351369141" Nov 21 13:34:07 crc kubenswrapper[4904]: I1121 13:34:07.230489 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kp8p2" podStartSLOduration=95.230474024 podStartE2EDuration="1m35.230474024s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:06.188501726 +0000 UTC m=+120.310034288" watchObservedRunningTime="2025-11-21 13:34:07.230474024 +0000 UTC m=+121.352006576" Nov 21 13:34:07 crc kubenswrapper[4904]: I1121 13:34:07.512684 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:07 crc kubenswrapper[4904]: E1121 13:34:07.512822 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:07 crc kubenswrapper[4904]: I1121 13:34:07.690155 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mx57c"] Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.188276 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/1.log" Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.188921 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/0.log" Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.188956 4904 generic.go:334] "Generic (PLEG): container finished" podID="190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a" containerID="d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2" exitCode=1 Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.189032 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgngm" event={"ID":"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a","Type":"ContainerDied","Data":"d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2"} Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.189081 4904 scope.go:117] "RemoveContainer" containerID="e1acb320f79fc6ca81c06e47878b3011981a133c4779aca0de42bc41e863647d" Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.189090 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:08 crc kubenswrapper[4904]: E1121 13:34:08.189444 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.189838 4904 scope.go:117] "RemoveContainer" containerID="d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2" Nov 21 13:34:08 crc kubenswrapper[4904]: E1121 13:34:08.189958 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kgngm_openshift-multus(190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a)\"" pod="openshift-multus/multus-kgngm" podUID="190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a" Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.512452 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.512474 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:08 crc kubenswrapper[4904]: E1121 13:34:08.512584 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:08 crc kubenswrapper[4904]: I1121 13:34:08.512605 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:08 crc kubenswrapper[4904]: E1121 13:34:08.512696 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:08 crc kubenswrapper[4904]: E1121 13:34:08.512740 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:09 crc kubenswrapper[4904]: I1121 13:34:09.193551 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/1.log" Nov 21 13:34:09 crc kubenswrapper[4904]: I1121 13:34:09.512057 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:09 crc kubenswrapper[4904]: E1121 13:34:09.512240 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:10 crc kubenswrapper[4904]: I1121 13:34:10.512703 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:10 crc kubenswrapper[4904]: I1121 13:34:10.512726 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:10 crc kubenswrapper[4904]: I1121 13:34:10.512825 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:10 crc kubenswrapper[4904]: E1121 13:34:10.512849 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:10 crc kubenswrapper[4904]: E1121 13:34:10.512932 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:10 crc kubenswrapper[4904]: E1121 13:34:10.513102 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:11 crc kubenswrapper[4904]: I1121 13:34:11.512494 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:11 crc kubenswrapper[4904]: E1121 13:34:11.512627 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:11 crc kubenswrapper[4904]: E1121 13:34:11.635511 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:12 crc kubenswrapper[4904]: I1121 13:34:12.512131 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:12 crc kubenswrapper[4904]: I1121 13:34:12.512223 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:12 crc kubenswrapper[4904]: I1121 13:34:12.512287 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:12 crc kubenswrapper[4904]: E1121 13:34:12.512502 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:12 crc kubenswrapper[4904]: E1121 13:34:12.512607 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:12 crc kubenswrapper[4904]: E1121 13:34:12.512697 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:13 crc kubenswrapper[4904]: I1121 13:34:13.512298 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:13 crc kubenswrapper[4904]: E1121 13:34:13.512433 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:14 crc kubenswrapper[4904]: I1121 13:34:14.512986 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:14 crc kubenswrapper[4904]: I1121 13:34:14.513128 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:14 crc kubenswrapper[4904]: E1121 13:34:14.513300 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:14 crc kubenswrapper[4904]: I1121 13:34:14.513376 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:14 crc kubenswrapper[4904]: E1121 13:34:14.513529 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:14 crc kubenswrapper[4904]: E1121 13:34:14.513754 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:15 crc kubenswrapper[4904]: I1121 13:34:15.511999 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:15 crc kubenswrapper[4904]: E1121 13:34:15.512193 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:16 crc kubenswrapper[4904]: I1121 13:34:16.512357 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:16 crc kubenswrapper[4904]: I1121 13:34:16.512399 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:16 crc kubenswrapper[4904]: I1121 13:34:16.512357 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:16 crc kubenswrapper[4904]: E1121 13:34:16.514368 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:16 crc kubenswrapper[4904]: E1121 13:34:16.514536 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:16 crc kubenswrapper[4904]: E1121 13:34:16.514702 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:16 crc kubenswrapper[4904]: E1121 13:34:16.639971 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:17 crc kubenswrapper[4904]: I1121 13:34:17.512879 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:17 crc kubenswrapper[4904]: E1121 13:34:17.513012 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:18 crc kubenswrapper[4904]: I1121 13:34:18.512507 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:18 crc kubenswrapper[4904]: I1121 13:34:18.512543 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:18 crc kubenswrapper[4904]: I1121 13:34:18.512585 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:18 crc kubenswrapper[4904]: E1121 13:34:18.512766 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:18 crc kubenswrapper[4904]: E1121 13:34:18.512900 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:18 crc kubenswrapper[4904]: E1121 13:34:18.513039 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:19 crc kubenswrapper[4904]: I1121 13:34:19.512803 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:19 crc kubenswrapper[4904]: E1121 13:34:19.513034 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:20 crc kubenswrapper[4904]: I1121 13:34:20.512413 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:20 crc kubenswrapper[4904]: I1121 13:34:20.512467 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:20 crc kubenswrapper[4904]: I1121 13:34:20.512467 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:20 crc kubenswrapper[4904]: E1121 13:34:20.512602 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:20 crc kubenswrapper[4904]: E1121 13:34:20.512700 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:20 crc kubenswrapper[4904]: E1121 13:34:20.512751 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:21 crc kubenswrapper[4904]: I1121 13:34:21.512696 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:21 crc kubenswrapper[4904]: E1121 13:34:21.512924 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:21 crc kubenswrapper[4904]: E1121 13:34:21.641699 4904 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:34:22 crc kubenswrapper[4904]: I1121 13:34:22.513039 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:22 crc kubenswrapper[4904]: I1121 13:34:22.513167 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:22 crc kubenswrapper[4904]: E1121 13:34:22.513248 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:22 crc kubenswrapper[4904]: I1121 13:34:22.513272 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:22 crc kubenswrapper[4904]: E1121 13:34:22.513720 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:22 crc kubenswrapper[4904]: I1121 13:34:22.513851 4904 scope.go:117] "RemoveContainer" containerID="d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2" Nov 21 13:34:22 crc kubenswrapper[4904]: E1121 13:34:22.514465 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:23 crc kubenswrapper[4904]: I1121 13:34:23.246826 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/1.log" Nov 21 13:34:23 crc kubenswrapper[4904]: I1121 13:34:23.247254 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgngm" event={"ID":"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a","Type":"ContainerStarted","Data":"e7bcb85c4309dddf373567192fb1362a6c42fb2260c7c8101594a97cf2d7016d"} Nov 21 13:34:23 crc kubenswrapper[4904]: I1121 13:34:23.512379 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:23 crc kubenswrapper[4904]: E1121 13:34:23.512553 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:24 crc kubenswrapper[4904]: I1121 13:34:24.512519 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:24 crc kubenswrapper[4904]: I1121 13:34:24.512623 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:24 crc kubenswrapper[4904]: I1121 13:34:24.512568 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:24 crc kubenswrapper[4904]: E1121 13:34:24.512881 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:24 crc kubenswrapper[4904]: E1121 13:34:24.513044 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:24 crc kubenswrapper[4904]: E1121 13:34:24.513243 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:25 crc kubenswrapper[4904]: I1121 13:34:25.512890 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:25 crc kubenswrapper[4904]: E1121 13:34:25.513160 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mx57c" podUID="7c038482-babe-44bf-a8ff-89415347e81f" Nov 21 13:34:26 crc kubenswrapper[4904]: I1121 13:34:26.513038 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:26 crc kubenswrapper[4904]: E1121 13:34:26.513310 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 21 13:34:26 crc kubenswrapper[4904]: I1121 13:34:26.513450 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:26 crc kubenswrapper[4904]: E1121 13:34:26.515018 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 21 13:34:26 crc kubenswrapper[4904]: I1121 13:34:26.515089 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:26 crc kubenswrapper[4904]: E1121 13:34:26.515338 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 21 13:34:27 crc kubenswrapper[4904]: I1121 13:34:27.513118 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:27 crc kubenswrapper[4904]: I1121 13:34:27.516142 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 21 13:34:27 crc kubenswrapper[4904]: I1121 13:34:27.516142 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 21 13:34:28 crc kubenswrapper[4904]: I1121 13:34:28.512575 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:28 crc kubenswrapper[4904]: I1121 13:34:28.512618 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:28 crc kubenswrapper[4904]: I1121 13:34:28.512575 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:28 crc kubenswrapper[4904]: I1121 13:34:28.515345 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 21 13:34:28 crc kubenswrapper[4904]: I1121 13:34:28.516520 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 21 13:34:28 crc kubenswrapper[4904]: I1121 13:34:28.516627 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 21 13:34:28 crc kubenswrapper[4904]: I1121 13:34:28.516631 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 21 13:34:32 crc kubenswrapper[4904]: I1121 13:34:32.761057 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.412178 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:34 crc kubenswrapper[4904]: E1121 13:34:34.412501 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:36:36.412480399 +0000 UTC m=+270.534012961 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.513237 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.513805 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.514048 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.514268 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.515565 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.526358 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.527126 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.528296 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.535376 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.543011 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:34 crc kubenswrapper[4904]: I1121 13:34:34.828472 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 21 13:34:34 crc kubenswrapper[4904]: W1121 13:34:34.891439 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-9cecf33690f24a2c6d194fc9966a7b5415c25d6258a91c431a48e787cf1d3ca1 WatchSource:0}: Error finding container 9cecf33690f24a2c6d194fc9966a7b5415c25d6258a91c431a48e787cf1d3ca1: Status 404 returned error can't find the container with id 9cecf33690f24a2c6d194fc9966a7b5415c25d6258a91c431a48e787cf1d3ca1 Nov 21 13:34:35 crc kubenswrapper[4904]: W1121 13:34:35.048273 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d6c9c5829a30f764ecb600c3942e7c564a386551678e087bf868833e77d9a35b WatchSource:0}: Error finding container d6c9c5829a30f764ecb600c3942e7c564a386551678e087bf868833e77d9a35b: Status 404 returned error can't find the container with id d6c9c5829a30f764ecb600c3942e7c564a386551678e087bf868833e77d9a35b Nov 21 13:34:35 crc kubenswrapper[4904]: W1121 13:34:35.050322 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-8c282a1ffe772c0058dfe7eb0a893bc33b4313fc5087307c5c9f53102afadc3a WatchSource:0}: Error finding container 8c282a1ffe772c0058dfe7eb0a893bc33b4313fc5087307c5c9f53102afadc3a: Status 404 returned error can't find the container with id 8c282a1ffe772c0058dfe7eb0a893bc33b4313fc5087307c5c9f53102afadc3a Nov 21 13:34:35 crc kubenswrapper[4904]: I1121 13:34:35.340125 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2e0ef3b5bcbccebe6e329c76ee68dc83bcd08dd1bc4686848e2baf9d95024b61"} Nov 21 13:34:35 crc kubenswrapper[4904]: I1121 13:34:35.340181 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8c282a1ffe772c0058dfe7eb0a893bc33b4313fc5087307c5c9f53102afadc3a"} Nov 21 13:34:35 crc kubenswrapper[4904]: I1121 13:34:35.342084 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6f5ce324d2e9453635518dc7bd8e786360a1289c8a088cb1ad0b90949551e251"} Nov 21 13:34:35 crc kubenswrapper[4904]: I1121 13:34:35.342134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9cecf33690f24a2c6d194fc9966a7b5415c25d6258a91c431a48e787cf1d3ca1"} Nov 21 13:34:35 crc kubenswrapper[4904]: I1121 13:34:35.342568 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 21 13:34:35 crc kubenswrapper[4904]: I1121 13:34:35.344015 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d319d1775e4a3b5b21d6ccc95f5539543c01930df953914d9ec4deeacbdd85fa"} Nov 21 13:34:35 crc kubenswrapper[4904]: I1121 13:34:35.344060 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d6c9c5829a30f764ecb600c3942e7c564a386551678e087bf868833e77d9a35b"} Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.082751 4904 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.137337 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wkqr4"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.137910 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.140448 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.142241 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.142452 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.142644 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.143262 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.144627 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.147258 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5b75z"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.147950 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9sdsl"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.147985 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.148405 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.152060 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7bm6t"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.152730 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.174280 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdh9s"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.174992 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.175454 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.175959 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.183229 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.183636 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.200154 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.200376 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.200510 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.200622 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.200680 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.200777 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.200790 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.201021 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.201640 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.202457 4904 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.202613 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.203255 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.203478 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.203839 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.203902 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.205075 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.209572 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.209776 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.211430 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fb4wt"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.211903 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.212362 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.212355 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213041 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213141 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213237 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213320 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213375 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213445 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213670 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213718 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213789 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213404 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213834 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213891 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.213928 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cfhwx"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.214163 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.214731 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cfhwx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.215001 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wwmcx"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.215586 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.215972 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.217884 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wkqr4"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.219412 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84l77"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.219809 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.220092 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.220366 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.221603 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.225945 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.226109 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.226729 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.226837 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.226990 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.227677 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.227779 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.227873 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.227943 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.228031 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.228109 4904 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.228176 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.230802 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.231301 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.231790 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254015 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254607 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254683 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-image-import-ca\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254712 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-etcd-client\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254738 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254760 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vssml\" (UniqueName: \"kubernetes.io/projected/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-kube-api-access-vssml\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254795 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-audit\") pod \"apiserver-76f77b778f-9sdsl\" (UID: 
\"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254856 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs967\" (UniqueName: \"kubernetes.io/projected/45113c1a-4545-4ec0-a1f7-f387bf548d6f-kube-api-access-fs967\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: \"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254875 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-serving-cert\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254895 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-config\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254922 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-etcd-serving-ca\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254945 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254968 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.254985 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45113c1a-4545-4ec0-a1f7-f387bf548d6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: 
\"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255009 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcsm\" (UniqueName: \"kubernetes.io/projected/df1572fa-33e4-4828-b32d-9721b2df142d-kube-api-access-ffcsm\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255029 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255050 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255074 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df1572fa-33e4-4828-b32d-9721b2df142d-audit-dir\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255092 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-audit-policies\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255115 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255138 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/149c997e-8119-469c-afd2-4b0bc403e07a-node-pullsecrets\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255164 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcxd\" (UniqueName: \"kubernetes.io/projected/149c997e-8119-469c-afd2-4b0bc403e07a-kube-api-access-5zcxd\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " 
pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255183 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255203 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-serving-cert\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255224 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-client-ca\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255250 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/149c997e-8119-469c-afd2-4b0bc403e07a-audit-dir\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255271 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255290 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-encryption-config\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255314 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255349 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" 
Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255374 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-config\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255395 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.255503 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/45113c1a-4545-4ec0-a1f7-f387bf548d6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: \"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.258179 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.261789 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.262895 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.264084 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.289449 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.289600 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.289937 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.290003 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.290108 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.290971 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.290976 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.291472 4904 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.291806 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.292310 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.292557 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.292865 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.294561 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.294808 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.295044 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.295314 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.295545 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.295787 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.297032 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.297171 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.297851 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.296894 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.298969 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.299178 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.299725 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.301849 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-n54bt"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.302409 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.310590 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5tm89"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.311117 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.317391 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.318727 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g96sr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.319305 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.319422 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.319706 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.319831 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.324013 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.324220 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.325942 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.326455 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.326810 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.326809 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.327027 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.327412 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.332818 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.333299 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.335300 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.335968 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.338629 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mg9wr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.339578 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.339600 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.339804 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.340239 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.341358 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.341598 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-g69x5"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.341842 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.342304 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.342574 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.342647 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.342738 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.345840 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdh9s"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.353550 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.353813 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.353937 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354155 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354257 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354321 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354371 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354464 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354568 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354605 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354686 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.354997 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.355247 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.355479 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356128 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356242 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356256 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356286 4904 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356762 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356769 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-config\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356791 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356814 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-client\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356914 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/45113c1a-4545-4ec0-a1f7-f387bf548d6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: \"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356979 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4781bb1-96ff-48a7-8429-8831b12c9f3b-serving-cert\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.356997 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6lnd\" (UniqueName: \"kubernetes.io/projected/8c6f1a5d-0119-4508-a7cd-5cb0eb522d01-kube-api-access-v6lnd\") pod \"downloads-7954f5f757-cfhwx\" (UID: \"8c6f1a5d-0119-4508-a7cd-5cb0eb522d01\") " pod="openshift-console/downloads-7954f5f757-cfhwx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357013 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b29d96e3-7aa4-4626-a245-93ee36f7595f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357032 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-image-import-ca\") pod \"apiserver-76f77b778f-9sdsl\" (UID: 
\"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357049 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357065 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-etcd-client\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357079 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-audit\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357093 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357111 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357127 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vssml\" (UniqueName: \"kubernetes.io/projected/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-kube-api-access-vssml\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357144 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs967\" (UniqueName: \"kubernetes.io/projected/45113c1a-4545-4ec0-a1f7-f387bf548d6f-kube-api-access-fs967\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: \"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357159 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-serving-cert\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 
13:34:36.357175 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-config\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357190 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b29d96e3-7aa4-4626-a245-93ee36f7595f-images\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357210 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpsb\" (UniqueName: \"kubernetes.io/projected/b4781bb1-96ff-48a7-8429-8831b12c9f3b-kube-api-access-9hpsb\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357260 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj94s\" (UniqueName: \"kubernetes.io/projected/cd7b70c9-69ff-4c9b-8ec9-da135b66a909-kube-api-access-cj94s\") pod \"cluster-samples-operator-665b6dd947-tqksx\" (UID: \"cd7b70c9-69ff-4c9b-8ec9-da135b66a909\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357280 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45113c1a-4545-4ec0-a1f7-f387bf548d6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: \"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357300 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-etcd-serving-ca\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357317 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357334 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357351 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-h48bh\" (UniqueName: \"kubernetes.io/projected/cbdb2624-b774-4a00-980e-55364d66743e-kube-api-access-h48bh\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357368 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffcsm\" (UniqueName: \"kubernetes.io/projected/df1572fa-33e4-4828-b32d-9721b2df142d-kube-api-access-ffcsm\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357385 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357405 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357421 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-serving-cert\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357438 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-audit-policies\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357455 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df1572fa-33e4-4828-b32d-9721b2df142d-audit-dir\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357472 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11648315-9879-46e9-9aa1-78340652ebfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdj9j\" (UniqueName: 
\"kubernetes.io/projected/b29d96e3-7aa4-4626-a245-93ee36f7595f-kube-api-access-xdj9j\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357505 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8060832-7d55-42ed-8499-b42b0b3fdc8b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357524 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357539 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rbv\" (UniqueName: \"kubernetes.io/projected/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-kube-api-access-k9rbv\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357555 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/149c997e-8119-469c-afd2-4b0bc403e07a-node-pullsecrets\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357572 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h667\" (UniqueName: \"kubernetes.io/projected/0d221639-40c0-4997-a42c-2e641f5793ab-kube-api-access-4h667\") pod \"dns-operator-744455d44c-wwmcx\" (UID: \"0d221639-40c0-4997-a42c-2e641f5793ab\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357588 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-auth-proxy-config\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357603 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd7b70c9-69ff-4c9b-8ec9-da135b66a909-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tqksx\" (UID: \"cd7b70c9-69ff-4c9b-8ec9-da135b66a909\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357620 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zcxd\" (UniqueName: 
\"kubernetes.io/projected/149c997e-8119-469c-afd2-4b0bc403e07a-kube-api-access-5zcxd\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-ca\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357667 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-config\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357685 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357702 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-serving-cert\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357719 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbdb2624-b774-4a00-980e-55364d66743e-serving-cert\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357736 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8060832-7d55-42ed-8499-b42b0b3fdc8b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357751 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-client-ca\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357768 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29d96e3-7aa4-4626-a245-93ee36f7595f-config\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: 
\"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357782 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-service-ca\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357799 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hls6\" (UniqueName: \"kubernetes.io/projected/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-kube-api-access-9hls6\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357813 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11648315-9879-46e9-9aa1-78340652ebfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357829 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbdb2624-b774-4a00-980e-55364d66743e-config\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357845 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/149c997e-8119-469c-afd2-4b0bc403e07a-audit-dir\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357861 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357877 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d221639-40c0-4997-a42c-2e641f5793ab-metrics-tls\") pod \"dns-operator-744455d44c-wwmcx\" (UID: \"0d221639-40c0-4997-a42c-2e641f5793ab\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357890 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-config\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 
13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357905 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-client-ca\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-encryption-config\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357935 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357959 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-machine-approver-tls\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357976 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbdb2624-b774-4a00-980e-55364d66743e-trusted-ca\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.357992 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8lwp\" (UniqueName: \"kubernetes.io/projected/d8060832-7d55-42ed-8499-b42b0b3fdc8b-kube-api-access-s8lwp\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.358010 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.358025 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-config\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.358042 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzmqk\" (UniqueName: \"kubernetes.io/projected/11648315-9879-46e9-9aa1-78340652ebfd-kube-api-access-bzmqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.358526 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-config\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.358810 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/45113c1a-4545-4ec0-a1f7-f387bf548d6f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: \"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.359053 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.359771 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.359798 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.359935 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/149c997e-8119-469c-afd2-4b0bc403e07a-node-pullsecrets\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.360684 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.361169 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.362295 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.365527 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-image-import-ca\") 
pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.365592 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/149c997e-8119-469c-afd2-4b0bc403e07a-audit-dir\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.367067 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-etcd-serving-ca\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.367525 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-audit\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.370438 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-client-ca\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.394873 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.395387 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.395432 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45113c1a-4545-4ec0-a1f7-f387bf548d6f-serving-cert\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: \"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.395748 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.395775 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-config\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.396485 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-serving-cert\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.396564 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.396880 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-encryption-config\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.397797 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.397830 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.398383 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.398722 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-serving-cert\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.398835 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.398936 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df1572fa-33e4-4828-b32d-9721b2df142d-audit-dir\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.399160 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/149c997e-8119-469c-afd2-4b0bc403e07a-etcd-client\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.400185 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.400727 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.400790 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.405143 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psv9p"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.406070 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.406449 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/149c997e-8119-469c-afd2-4b0bc403e07a-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.414240 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.415119 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-audit-policies\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.427062 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.427718 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.428552 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-psv9p" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.428719 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.428941 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7bm6t"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.429973 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.430030 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.431347 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.431974 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.432896 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.433160 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w4c85"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.433890 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.437546 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bff4w"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.438536 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.439016 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.440547 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.441193 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-k4gzk"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.441329 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.441988 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wwmcx"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.442057 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k4gzk" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.442647 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fb4wt"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.445556 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9sdsl"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.446957 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.447114 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84l77"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.448132 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.453558 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cfhwx"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.455766 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.457780 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.458495 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c586f4f-c286-4408-b2a1-6041ed6c435e-audit-dir\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.458678 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbdb2624-b774-4a00-980e-55364d66743e-serving-cert\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.458780 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8060832-7d55-42ed-8499-b42b0b3fdc8b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.458790 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-n54bt"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.460667 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.458909 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-etcd-client\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.461031 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86wzk\" (UniqueName: \"kubernetes.io/projected/3c586f4f-c286-4408-b2a1-6041ed6c435e-kube-api-access-86wzk\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.461551 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-service-ca\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.461692 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-oauth-config\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.461794 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-oauth-serving-cert\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.461889 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29d96e3-7aa4-4626-a245-93ee36f7595f-config\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.461967 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hls6\" (UniqueName: \"kubernetes.io/projected/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-kube-api-access-9hls6\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.462142 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11648315-9879-46e9-9aa1-78340652ebfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 
13:34:36.462271 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbdb2624-b774-4a00-980e-55364d66743e-config\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.462850 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b29d96e3-7aa4-4626-a245-93ee36f7595f-config\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.461205 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.462928 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbdb2624-b774-4a00-980e-55364d66743e-serving-cert\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.462419 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5b75z"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.462378 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8060832-7d55-42ed-8499-b42b0b3fdc8b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.462831 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f672647-bac8-475b-879e-3f67cfe10017-proxy-tls\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.463236 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.463329 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d221639-40c0-4997-a42c-2e641f5793ab-metrics-tls\") pod \"dns-operator-744455d44c-wwmcx\" (UID: \"0d221639-40c0-4997-a42c-2e641f5793ab\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.463411 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-config\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.463490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-default-certificate\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.463564 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-encryption-config\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.463635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb945dc3-fcd0-43b6-a048-82682523848b-service-ca-bundle\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.463739 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-client-ca\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.463897 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-serving-cert\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.464075 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxkt6\" (UniqueName: \"kubernetes.io/projected/b94e951c-fc0f-4225-89c3-d902f69a9ed6-kube-api-access-cxkt6\") pod \"multus-admission-controller-857f4d67dd-mg9wr\" (UID: \"b94e951c-fc0f-4225-89c3-d902f69a9ed6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.464111 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-config\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.464172 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-stats-auth\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.464289 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-config\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.464391 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-service-ca\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.464695 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-client-ca\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.464760 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.465899 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/431b2824-783e-4112-a269-dbb3514adc8b-srv-cert\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.465962 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-machine-approver-tls\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.465982 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbdb2624-b774-4a00-980e-55364d66743e-trusted-ca\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466096 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466182 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-config\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466203 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bzmqk\" (UniqueName: \"kubernetes.io/projected/11648315-9879-46e9-9aa1-78340652ebfd-kube-api-access-bzmqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466387 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8lwp\" (UniqueName: \"kubernetes.io/projected/d8060832-7d55-42ed-8499-b42b0b3fdc8b-kube-api-access-s8lwp\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466432 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f672647-bac8-475b-879e-3f67cfe10017-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466474 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/431b2824-783e-4112-a269-dbb3514adc8b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466499 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-proxy-tls\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466525 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466547 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-client\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466666 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4781bb1-96ff-48a7-8429-8831b12c9f3b-serving-cert\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466746 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-trusted-ca-bundle\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466825 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbdb2624-b774-4a00-980e-55364d66743e-config\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466830 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6lnd\" (UniqueName: \"kubernetes.io/projected/8c6f1a5d-0119-4508-a7cd-5cb0eb522d01-kube-api-access-v6lnd\") pod \"downloads-7954f5f757-cfhwx\" (UID: \"8c6f1a5d-0119-4508-a7cd-5cb0eb522d01\") " pod="openshift-console/downloads-7954f5f757-cfhwx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466894 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b29d96e3-7aa4-4626-a245-93ee36f7595f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466927 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-webhook-cert\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466952 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466971 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-serving-cert\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466994 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/40a7951a-a4f5-415c-8823-ca5d7428b792-tmpfs\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.467057 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg82f\" (UniqueName: 
\"kubernetes.io/projected/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-kube-api-access-qg82f\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.467106 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-audit-policies\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.466713 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11648315-9879-46e9-9aa1-78340652ebfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.467292 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cbdb2624-b774-4a00-980e-55364d66743e-trusted-ca\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.467431 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-config\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.468564 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0d221639-40c0-4997-a42c-2e641f5793ab-metrics-tls\") pod \"dns-operator-744455d44c-wwmcx\" (UID: \"0d221639-40c0-4997-a42c-2e641f5793ab\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.468681 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b29d96e3-7aa4-4626-a245-93ee36f7595f-images\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.468726 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpsb\" (UniqueName: \"kubernetes.io/projected/b4781bb1-96ff-48a7-8429-8831b12c9f3b-kube-api-access-9hpsb\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.468789 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj94s\" (UniqueName: \"kubernetes.io/projected/cd7b70c9-69ff-4c9b-8ec9-da135b66a909-kube-api-access-cj94s\") pod \"cluster-samples-operator-665b6dd947-tqksx\" (UID: \"cd7b70c9-69ff-4c9b-8ec9-da135b66a909\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.468842 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48bh\" (UniqueName: \"kubernetes.io/projected/cbdb2624-b774-4a00-980e-55364d66743e-kube-api-access-h48bh\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.468889 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvndg\" (UniqueName: \"kubernetes.io/projected/6f672647-bac8-475b-879e-3f67cfe10017-kube-api-access-kvndg\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.468921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-serving-cert\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.468947 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-apiservice-cert\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469020 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-metrics-certs\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469051 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11648315-9879-46e9-9aa1-78340652ebfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469070 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdj9j\" (UniqueName: \"kubernetes.io/projected/b29d96e3-7aa4-4626-a245-93ee36f7595f-kube-api-access-xdj9j\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469090 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8060832-7d55-42ed-8499-b42b0b3fdc8b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469111 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttdfj\" (UniqueName: \"kubernetes.io/projected/dff8d615-db53-4198-adbd-b6f5fc1cd2be-kube-api-access-ttdfj\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469138 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9rbv\" (UniqueName: \"kubernetes.io/projected/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-kube-api-access-k9rbv\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469158 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b94e951c-fc0f-4225-89c3-d902f69a9ed6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mg9wr\" (UID: \"b94e951c-fc0f-4225-89c3-d902f69a9ed6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469178 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99fj8\" (UniqueName: \"kubernetes.io/projected/431b2824-783e-4112-a269-dbb3514adc8b-kube-api-access-99fj8\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469208 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h667\" (UniqueName: \"kubernetes.io/projected/0d221639-40c0-4997-a42c-2e641f5793ab-kube-api-access-4h667\") pod \"dns-operator-744455d44c-wwmcx\" (UID: \"0d221639-40c0-4997-a42c-2e641f5793ab\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469253 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv2jr\" (UniqueName: \"kubernetes.io/projected/fb945dc3-fcd0-43b6-a048-82682523848b-kube-api-access-wv2jr\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469271 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-service-ca\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " 
pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469291 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-auth-proxy-config\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469310 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd7b70c9-69ff-4c9b-8ec9-da135b66a909-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tqksx\" (UID: \"cd7b70c9-69ff-4c9b-8ec9-da135b66a909\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469330 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469349 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-images\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469379 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-ca\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469396 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-config\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469414 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4hhf\" (UniqueName: \"kubernetes.io/projected/40a7951a-a4f5-415c-8823-ca5d7428b792-kube-api-access-v4hhf\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.469942 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11648315-9879-46e9-9aa1-78340652ebfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 
13:34:36.470032 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8060832-7d55-42ed-8499-b42b0b3fdc8b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.470365 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.471076 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b29d96e3-7aa4-4626-a245-93ee36f7595f-images\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.471190 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.472229 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-machine-approver-tls\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.472278 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b29d96e3-7aa4-4626-a245-93ee36f7595f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.473229 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.473484 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-config\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.474865 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-ca\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.475434 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.475518 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-serving-cert\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.475562 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd7b70c9-69ff-4c9b-8ec9-da135b66a909-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-tqksx\" (UID: \"cd7b70c9-69ff-4c9b-8ec9-da135b66a909\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.475865 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b4781bb1-96ff-48a7-8429-8831b12c9f3b-etcd-client\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.476950 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g96sr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.478256 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.479071 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-auth-proxy-config\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.479172 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4781bb1-96ff-48a7-8429-8831b12c9f3b-serving-cert\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.482047 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.482250 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.485574 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dtsx4"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.486792 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w4c85"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.486914 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dtsx4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.487716 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-twpnr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.489439 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g69x5"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.489521 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.490253 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.493447 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.494453 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.495584 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.496694 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.500766 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.518358 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.518422 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psv9p"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.518436 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k4gzk"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.518451 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bff4w"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.518465 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.519117 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.520042 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.520552 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-twpnr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.521940 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wxjkg"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.523059 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mg9wr"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.523240 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.523926 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wxjkg"] Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.540769 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.559924 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570069 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570104 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-serving-cert\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570123 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/40a7951a-a4f5-415c-8823-ca5d7428b792-tmpfs\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570144 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg82f\" (UniqueName: \"kubernetes.io/projected/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-kube-api-access-qg82f\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570178 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-audit-policies\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570220 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvndg\" (UniqueName: \"kubernetes.io/projected/6f672647-bac8-475b-879e-3f67cfe10017-kube-api-access-kvndg\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570237 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-apiservice-cert\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570253 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-metrics-certs\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570276 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttdfj\" (UniqueName: \"kubernetes.io/projected/dff8d615-db53-4198-adbd-b6f5fc1cd2be-kube-api-access-ttdfj\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570302 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b94e951c-fc0f-4225-89c3-d902f69a9ed6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mg9wr\" (UID: \"b94e951c-fc0f-4225-89c3-d902f69a9ed6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570321 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99fj8\" (UniqueName: \"kubernetes.io/projected/431b2824-783e-4112-a269-dbb3514adc8b-kube-api-access-99fj8\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570346 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570361 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv2jr\" (UniqueName: \"kubernetes.io/projected/fb945dc3-fcd0-43b6-a048-82682523848b-kube-api-access-wv2jr\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570376 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-service-ca\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570392 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570406 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-images\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570428 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4hhf\" (UniqueName: \"kubernetes.io/projected/40a7951a-a4f5-415c-8823-ca5d7428b792-kube-api-access-v4hhf\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570447 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c586f4f-c286-4408-b2a1-6041ed6c435e-audit-dir\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570463 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-etcd-client\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570479 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86wzk\" (UniqueName: \"kubernetes.io/projected/3c586f4f-c286-4408-b2a1-6041ed6c435e-kube-api-access-86wzk\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570498 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-oauth-config\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570515 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-oauth-serving-cert\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570535 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f672647-bac8-475b-879e-3f67cfe10017-proxy-tls\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570559 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 
13:34:36.570580 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-default-certificate\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570598 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb945dc3-fcd0-43b6-a048-82682523848b-service-ca-bundle\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570614 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-encryption-config\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570629 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxkt6\" (UniqueName: \"kubernetes.io/projected/b94e951c-fc0f-4225-89c3-d902f69a9ed6-kube-api-access-cxkt6\") pod \"multus-admission-controller-857f4d67dd-mg9wr\" (UID: \"b94e951c-fc0f-4225-89c3-d902f69a9ed6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570645 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-config\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570677 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-serving-cert\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570694 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/431b2824-783e-4112-a269-dbb3514adc8b-srv-cert\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570714 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-stats-auth\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570738 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") 
" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570770 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f672647-bac8-475b-879e-3f67cfe10017-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570785 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-proxy-tls\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570801 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570817 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/431b2824-783e-4112-a269-dbb3514adc8b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570834 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-trusted-ca-bundle\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.570858 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-webhook-cert\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.571134 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3c586f4f-c286-4408-b2a1-6041ed6c435e-audit-dir\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.571718 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb945dc3-fcd0-43b6-a048-82682523848b-service-ca-bundle\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.571766 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-audit-policies\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.572102 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.572909 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/40a7951a-a4f5-415c-8823-ca5d7428b792-tmpfs\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.573332 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3c586f4f-c286-4408-b2a1-6041ed6c435e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.573414 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.574821 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-metrics-certs\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.576360 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-etcd-client\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.573697 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6f672647-bac8-475b-879e-3f67cfe10017-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.587630 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-default-certificate\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc 
kubenswrapper[4904]: I1121 13:34:36.587686 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fb945dc3-fcd0-43b6-a048-82682523848b-stats-auth\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.587704 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-serving-cert\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.589234 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3c586f4f-c286-4408-b2a1-6041ed6c435e-encryption-config\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.589480 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.599834 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.619612 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.640373 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.664941 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.684999 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.702370 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.720092 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.740541 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.760756 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.780340 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.800620 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 21 
13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.819747 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.840430 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.859643 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.880409 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.900464 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.920281 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.945485 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.959927 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.979685 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.982266 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:36 crc kubenswrapper[4904]: I1121 13:34:36.999728 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.020290 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.027206 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.040791 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.060028 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.080256 4904 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.101436 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.120590 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.129410 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f672647-bac8-475b-879e-3f67cfe10017-proxy-tls\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.141174 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.155530 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b94e951c-fc0f-4225-89c3-d902f69a9ed6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mg9wr\" (UID: \"b94e951c-fc0f-4225-89c3-d902f69a9ed6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.161300 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.180989 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.200395 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.221055 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.241672 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.261630 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.267734 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/431b2824-783e-4112-a269-dbb3514adc8b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.281031 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.282686 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-oauth-serving-cert\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.300123 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.327765 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.333521 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-serving-cert\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.341495 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.343326 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-config\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.359050 4904 request.go:700] Waited for 1.016048175s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&limit=500&resourceVersion=0 Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.361011 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.362942 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-service-ca\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.380186 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.385927 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-oauth-config\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.406538 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.414852 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-trusted-ca-bundle\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 
13:34:37.419879 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.440000 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.445935 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/431b2824-783e-4112-a269-dbb3514adc8b-srv-cert\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.480613 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vssml\" (UniqueName: \"kubernetes.io/projected/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-kube-api-access-vssml\") pod \"controller-manager-879f6c89f-wkqr4\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.496284 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffcsm\" (UniqueName: \"kubernetes.io/projected/df1572fa-33e4-4828-b32d-9721b2df142d-kube-api-access-ffcsm\") pod \"oauth-openshift-558db77b4-7bm6t\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.555235 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zcxd\" (UniqueName: \"kubernetes.io/projected/149c997e-8119-469c-afd2-4b0bc403e07a-kube-api-access-5zcxd\") pod \"apiserver-76f77b778f-9sdsl\" (UID: \"149c997e-8119-469c-afd2-4b0bc403e07a\") " pod="openshift-apiserver/apiserver-76f77b778f-9sdsl"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.560733 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.561274 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs967\" (UniqueName: \"kubernetes.io/projected/45113c1a-4545-4ec0-a1f7-f387bf548d6f-kube-api-access-fs967\") pod \"openshift-config-operator-7777fb866f-5b75z\" (UID: \"45113c1a-4545-4ec0-a1f7-f387bf548d6f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z"
Nov 21 13:34:37 crc kubenswrapper[4904]: E1121 13:34:37.570777 4904 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Nov 21 13:34:37 crc kubenswrapper[4904]: E1121 13:34:37.570885 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-apiservice-cert podName:40a7951a-a4f5-415c-8823-ca5d7428b792 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:38.070859722 +0000 UTC m=+152.192392294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-apiservice-cert") pod "packageserver-d55dfcdfc-dzb8c" (UID: "40a7951a-a4f5-415c-8823-ca5d7428b792") : failed to sync secret cache: timed out waiting for the condition
Nov 21 13:34:37 crc kubenswrapper[4904]: E1121 13:34:37.571019 4904 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Nov 21 13:34:37 crc kubenswrapper[4904]: E1121 13:34:37.571120 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-webhook-cert podName:40a7951a-a4f5-415c-8823-ca5d7428b792 nodeName:}" failed. No retries permitted until 2025-11-21 13:34:38.071093788 +0000 UTC m=+152.192626410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-webhook-cert") pod "packageserver-d55dfcdfc-dzb8c" (UID: "40a7951a-a4f5-415c-8823-ca5d7428b792") : failed to sync secret cache: timed out waiting for the condition
Nov 21 13:34:37 crc kubenswrapper[4904]: E1121 13:34:37.574017 4904 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Nov 21 13:34:37 crc kubenswrapper[4904]: E1121 13:34:37.574063 4904 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Nov 21 13:34:37 crc kubenswrapper[4904]: E1121 13:34:37.574115 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-proxy-tls podName:b5b8159d-494a-4eb8-bdf9-6facb27ecf0d nodeName:}" failed. No retries permitted until 2025-11-21 13:34:38.074092262 +0000 UTC m=+152.195624914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-proxy-tls") pod "machine-config-operator-74547568cd-twwpr" (UID: "b5b8159d-494a-4eb8-bdf9-6facb27ecf0d") : failed to sync secret cache: timed out waiting for the condition
Nov 21 13:34:37 crc kubenswrapper[4904]: E1121 13:34:37.574138 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-images podName:b5b8159d-494a-4eb8-bdf9-6facb27ecf0d nodeName:}" failed. No retries permitted until 2025-11-21 13:34:38.074128833 +0000 UTC m=+152.195661505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-images") pod "machine-config-operator-74547568cd-twwpr" (UID: "b5b8159d-494a-4eb8-bdf9-6facb27ecf0d") : failed to sync configmap cache: timed out waiting for the condition
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.581097 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.600876 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.620558 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.640362 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.661908 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.672706 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.685891 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.703025 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.713440 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.721555 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.727010 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.741059 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.761439 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.780329 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.800748 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.820893 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.840464 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.861905 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.881610 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.888542 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wkqr4"]
Nov 21 13:34:37 crc kubenswrapper[4904]: W1121 13:34:37.898154 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6456a0e_a019_4fe5_86ae_0f3ef7cdcdf5.slice/crio-76a1cfc6268e02d4289d91dc0e5edf40fc7376a47b5b5e0330345b168972e5a6 WatchSource:0}: Error finding container 76a1cfc6268e02d4289d91dc0e5edf40fc7376a47b5b5e0330345b168972e5a6: Status 404 returned error can't find the container with id 76a1cfc6268e02d4289d91dc0e5edf40fc7376a47b5b5e0330345b168972e5a6
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.901791 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.924019 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.924062 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5b75z"]
Nov 21 13:34:37 crc kubenswrapper[4904]: W1121 13:34:37.933369 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45113c1a_4545_4ec0_a1f7_f387bf548d6f.slice/crio-26f67e28a26a7c7301adbcc8df066ee660b8069f0db753693183d52ab3c32308 WatchSource:0}: Error finding container 26f67e28a26a7c7301adbcc8df066ee660b8069f0db753693183d52ab3c32308: Status 404 returned error can't find the container with id 26f67e28a26a7c7301adbcc8df066ee660b8069f0db753693183d52ab3c32308
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.940784 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.955351 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9sdsl"]
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.960770 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.984253 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7bm6t"]
Nov 21 13:34:37 crc kubenswrapper[4904]: I1121 13:34:37.985958 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 21 13:34:37 crc kubenswrapper[4904]: W1121 13:34:37.991398 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf1572fa_33e4_4828_b32d_9721b2df142d.slice/crio-014f2e045780d1a94dcc90cf78ca0a1c649c80ee4771be1d1e19a381748c8bbe WatchSource:0}: Error finding container 014f2e045780d1a94dcc90cf78ca0a1c649c80ee4771be1d1e19a381748c8bbe: Status 404 returned error can't find the container with id 014f2e045780d1a94dcc90cf78ca0a1c649c80ee4771be1d1e19a381748c8bbe
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.000079 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.020155 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.039906 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.060611 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.081209 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.091144 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-apiservice-cert\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.091441 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-images\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.091612 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-proxy-tls\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.091678 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-webhook-cert\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.092803 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-images\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.097104 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-apiservice-cert\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.097116 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/40a7951a-a4f5-415c-8823-ca5d7428b792-webhook-cert\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.098947 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-proxy-tls\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.101216 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.119776 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.155170 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hls6\" (UniqueName: \"kubernetes.io/projected/2b5a0949-d11a-49fb-b3ac-023c6ac6abd8-kube-api-access-9hls6\") pod \"machine-approver-56656f9798-p6p2w\" (UID: \"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.174169 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6lnd\" (UniqueName: \"kubernetes.io/projected/8c6f1a5d-0119-4508-a7cd-5cb0eb522d01-kube-api-access-v6lnd\") pod \"downloads-7954f5f757-cfhwx\" (UID: \"8c6f1a5d-0119-4508-a7cd-5cb0eb522d01\") " pod="openshift-console/downloads-7954f5f757-cfhwx"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.196766 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8lwp\" (UniqueName: \"kubernetes.io/projected/d8060832-7d55-42ed-8499-b42b0b3fdc8b-kube-api-access-s8lwp\") pod \"openshift-apiserver-operator-796bbdcf4f-vc2gr\" (UID: \"d8060832-7d55-42ed-8499-b42b0b3fdc8b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.216359 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzmqk\" (UniqueName: \"kubernetes.io/projected/11648315-9879-46e9-9aa1-78340652ebfd-kube-api-access-bzmqk\") pod \"openshift-controller-manager-operator-756b6f6bc6-6rfll\" (UID: \"11648315-9879-46e9-9aa1-78340652ebfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.240494 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj94s\" (UniqueName: \"kubernetes.io/projected/cd7b70c9-69ff-4c9b-8ec9-da135b66a909-kube-api-access-cj94s\") pod \"cluster-samples-operator-665b6dd947-tqksx\" (UID: \"cd7b70c9-69ff-4c9b-8ec9-da135b66a909\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.243017 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.254987 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h48bh\" (UniqueName: \"kubernetes.io/projected/cbdb2624-b774-4a00-980e-55364d66743e-kube-api-access-h48bh\") pod \"console-operator-58897d9998-fb4wt\" (UID: \"cbdb2624-b774-4a00-980e-55364d66743e\") " pod="openshift-console-operator/console-operator-58897d9998-fb4wt"
Nov 21 13:34:38 crc kubenswrapper[4904]: W1121 13:34:38.255692 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b5a0949_d11a_49fb_b3ac_023c6ac6abd8.slice/crio-83d3c64916ea828217a983b67efb60d020de51fc8c9a24bcacf9e19073a9b168 WatchSource:0}: Error finding container 83d3c64916ea828217a983b67efb60d020de51fc8c9a24bcacf9e19073a9b168: Status 404 returned error can't find the container with id 83d3c64916ea828217a983b67efb60d020de51fc8c9a24bcacf9e19073a9b168
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.274190 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpsb\" (UniqueName: \"kubernetes.io/projected/b4781bb1-96ff-48a7-8429-8831b12c9f3b-kube-api-access-9hpsb\") pod \"etcd-operator-b45778765-84l77\" (UID: \"b4781bb1-96ff-48a7-8429-8831b12c9f3b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-84l77"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.297399 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdj9j\" (UniqueName: \"kubernetes.io/projected/b29d96e3-7aa4-4626-a245-93ee36f7595f-kube-api-access-xdj9j\") pod \"machine-api-operator-5694c8668f-qdh9s\" (UID: \"b29d96e3-7aa4-4626-a245-93ee36f7595f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.315447 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9rbv\" (UniqueName: \"kubernetes.io/projected/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-kube-api-access-k9rbv\") pod \"route-controller-manager-6576b87f9c-gqj7v\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.336803 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h667\" (UniqueName: \"kubernetes.io/projected/0d221639-40c0-4997-a42c-2e641f5793ab-kube-api-access-4h667\") pod \"dns-operator-744455d44c-wwmcx\" (UID: \"0d221639-40c0-4997-a42c-2e641f5793ab\") " pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.342671 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.360776 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" event={"ID":"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8","Type":"ContainerStarted","Data":"83d3c64916ea828217a983b67efb60d020de51fc8c9a24bcacf9e19073a9b168"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.361123 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.362716 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.363533 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" event={"ID":"df1572fa-33e4-4828-b32d-9721b2df142d","Type":"ContainerStarted","Data":"1f33b89ef8f1391622ce75eb2b2952fa8eefec5601a2e24f079ad58520fe39de"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.363621 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" event={"ID":"df1572fa-33e4-4828-b32d-9721b2df142d","Type":"ContainerStarted","Data":"014f2e045780d1a94dcc90cf78ca0a1c649c80ee4771be1d1e19a381748c8bbe"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.363781 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.365398 4904 generic.go:334] "Generic (PLEG): container finished" podID="45113c1a-4545-4ec0-a1f7-f387bf548d6f" containerID="e4e2f3aa71bb920dd0dac648f1590d755193700d3dab3421a9f8578d504ef194" exitCode=0
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.365463 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" event={"ID":"45113c1a-4545-4ec0-a1f7-f387bf548d6f","Type":"ContainerDied","Data":"e4e2f3aa71bb920dd0dac648f1590d755193700d3dab3421a9f8578d504ef194"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.365491 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" event={"ID":"45113c1a-4545-4ec0-a1f7-f387bf548d6f","Type":"ContainerStarted","Data":"26f67e28a26a7c7301adbcc8df066ee660b8069f0db753693183d52ab3c32308"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.367062 4904 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7bm6t container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body=
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.367105 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" podUID="df1572fa-33e4-4828-b32d-9721b2df142d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.368621 4904 generic.go:334] "Generic (PLEG): container finished" podID="149c997e-8119-469c-afd2-4b0bc403e07a" containerID="3f5eaa32606ec1267ca1abebdd44723f9609221f3fd12f6f00f8ad5ba013ebf0" exitCode=0
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.369385 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" event={"ID":"149c997e-8119-469c-afd2-4b0bc403e07a","Type":"ContainerDied","Data":"3f5eaa32606ec1267ca1abebdd44723f9609221f3fd12f6f00f8ad5ba013ebf0"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.369422 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" event={"ID":"149c997e-8119-469c-afd2-4b0bc403e07a","Type":"ContainerStarted","Data":"727add7f24157e2475f50aedabc385d3dce16e53d3d79ff8198caafb4e0cef97"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.371490 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" event={"ID":"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5","Type":"ContainerStarted","Data":"ff3c84f4a4346f02575943f8a97211163143b3bcc9972b205d32f95aa98a4b92"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.371597 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" event={"ID":"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5","Type":"ContainerStarted","Data":"76a1cfc6268e02d4289d91dc0e5edf40fc7376a47b5b5e0330345b168972e5a6"}
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.372069 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.373389 4904 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wkqr4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.373439 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" podUID="a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.378399 4904 request.go:700] Waited for 1.891091727s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.379989 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.404083 4904 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.409945 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.420782 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.421386 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.433277 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fb4wt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.443085 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.443707 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cfhwx"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.456508 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.460681 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.503256 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.507736 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.507892 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.544349 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6599d2b5-6e0e-4d90-a7ee-f4b804ee185d-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tfzhv\" (UID: \"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.550007 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.550457 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-84l77"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.574076 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvndg\" (UniqueName: \"kubernetes.io/projected/6f672647-bac8-475b-879e-3f67cfe10017-kube-api-access-kvndg\") pod \"machine-config-controller-84d6567774-8l6wj\" (UID: \"6f672647-bac8-475b-879e-3f67cfe10017\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.590466 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttdfj\" (UniqueName: \"kubernetes.io/projected/dff8d615-db53-4198-adbd-b6f5fc1cd2be-kube-api-access-ttdfj\") pod \"console-f9d7485db-g69x5\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " pod="openshift-console/console-f9d7485db-g69x5"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.607016 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99fj8\" (UniqueName: \"kubernetes.io/projected/431b2824-783e-4112-a269-dbb3514adc8b-kube-api-access-99fj8\") pod \"olm-operator-6b444d44fb-qvbx4\" (UID: \"431b2824-783e-4112-a269-dbb3514adc8b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.629942 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv2jr\" (UniqueName: \"kubernetes.io/projected/fb945dc3-fcd0-43b6-a048-82682523848b-kube-api-access-wv2jr\") pod \"router-default-5444994796-5tm89\" (UID: \"fb945dc3-fcd0-43b6-a048-82682523848b\") " pod="openshift-ingress/router-default-5444994796-5tm89"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.647063 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr"]
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.649608 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4hhf\" (UniqueName: \"kubernetes.io/projected/40a7951a-a4f5-415c-8823-ca5d7428b792-kube-api-access-v4hhf\") pod \"packageserver-d55dfcdfc-dzb8c\" (UID: \"40a7951a-a4f5-415c-8823-ca5d7428b792\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.656744 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg82f\" (UniqueName: \"kubernetes.io/projected/b5b8159d-494a-4eb8-bdf9-6facb27ecf0d-kube-api-access-qg82f\") pod \"machine-config-operator-74547568cd-twwpr\" (UID: \"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.663943 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.693205 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxkt6\" (UniqueName: \"kubernetes.io/projected/b94e951c-fc0f-4225-89c3-d902f69a9ed6-kube-api-access-cxkt6\") pod \"multus-admission-controller-857f4d67dd-mg9wr\" (UID: \"b94e951c-fc0f-4225-89c3-d902f69a9ed6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.694950 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.706848 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr"
Nov 21 13:34:38 crc kubenswrapper[4904]: W1121 13:34:38.707016 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8060832_7d55_42ed_8499_b42b0b3fdc8b.slice/crio-db2c2e9db1064dba470335d117f637826650a297148238f0c2a4bf521f508088 WatchSource:0}: Error finding container db2c2e9db1064dba470335d117f637826650a297148238f0c2a4bf521f508088: Status 404 returned error can't find the container with id db2c2e9db1064dba470335d117f637826650a297148238f0c2a4bf521f508088
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.715550 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g69x5"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.718968 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86wzk\" (UniqueName: \"kubernetes.io/projected/3c586f4f-c286-4408-b2a1-6041ed6c435e-kube-api-access-86wzk\") pod \"apiserver-7bbb656c7d-qnqnz\" (UID: \"3c586f4f-c286-4408-b2a1-6041ed6c435e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.725350 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.736475 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.742647 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.807970 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db301386-8de5-4553-a48d-fd858d4fc6f9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808007 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5cbe6fd6-8503-46d7-bf62-fa55adda1648-metrics-tls\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808025 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-tls\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808072 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-config\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808095 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/61b9b037-b82b-4c71-a535-aaae2a66dee1-srv-cert\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808114 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjtxm\" (UniqueName: \"kubernetes.io/projected/d19a8a38-94ec-4377-8f90-24f34f5fb547-kube-api-access-vjtxm\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808130 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19a8a38-94ec-4377-8f90-24f34f5fb547-secret-volume\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808147 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4151cb5-63cb-45ad-b192-cc67dbea2add-serving-cert\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808161 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19a8a38-94ec-4377-8f90-24f34f5fb547-config-volume\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808192 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cbe6fd6-8503-46d7-bf62-fa55adda1648-trusted-ca\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808231 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808260 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8fb172-370c-4941-bf58-ce44f243ae48-config\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808279 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808299 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cbe6fd6-8503-46d7-bf62-fa55adda1648-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808344 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3cdae8-1188-4f28-b434-80ce839aa050-config\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808369 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db301386-8de5-4553-a48d-fd858d4fc6f9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808384 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8z9v\" (UniqueName: \"kubernetes.io/projected/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-kube-api-access-p8z9v\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808400 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808415 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt96h\" (UniqueName: \"kubernetes.io/projected/d4151cb5-63cb-45ad-b192-cc67dbea2add-kube-api-access-lt96h\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808449 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-bound-sa-token\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808465 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-service-ca-bundle\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8fb172-370c-4941-bf58-ce44f243ae48-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808507 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b3cdae8-1188-4f28-b434-80ce839aa050-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808555 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/61b9b037-b82b-4c71-a535-aaae2a66dee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808586 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808615 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pql5s\" (UniqueName: \"kubernetes.io/projected/61b9b037-b82b-4c71-a535-aaae2a66dee1-kube-api-access-pql5s\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808671 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-certificates\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808699 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfh6\" (UniqueName: \"kubernetes.io/projected/5cbe6fd6-8503-46d7-bf62-fa55adda1648-kube-api-access-jdfh6\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808733 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k6tl\" (UniqueName: \"kubernetes.io/projected/e44ff011-5019-4148-a16a-85c08b435a70-kube-api-access-8k6tl\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808759 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2q4\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-kube-api-access-2t2q4\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808788 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cnj5\" (UniqueName: \"kubernetes.io/projected/22e8702a-b556-4a2d-a3bc-d27bef18cfb2-kube-api-access-7cnj5\") pod \"migrator-59844c95c7-j2pbx\" (UID: \"22e8702a-b556-4a2d-a3bc-d27bef18cfb2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808850 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-trusted-ca\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808865 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3cdae8-1188-4f28-b434-80ce839aa050-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808879 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e44ff011-5019-4148-a16a-85c08b435a70-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808900 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e44ff011-5019-4148-a16a-85c08b435a70-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.808981 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf8fb172-370c-4941-bf58-ce44f243ae48-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"
Nov 21 13:34:38 crc kubenswrapper[4904]: E1121 13:34:38.809283 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.309269013 +0000 UTC m=+153.430801565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.861726 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.896567 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5tm89"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915355 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915600 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-mountpoint-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915635 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3cdae8-1188-4f28-b434-80ce839aa050-config\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915695 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db301386-8de5-4553-a48d-fd858d4fc6f9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915726 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8z9v\" (UniqueName: \"kubernetes.io/projected/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-kube-api-access-p8z9v\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915761 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915785 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-metrics-tls\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915809 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt96h\" (UniqueName: \"kubernetes.io/projected/d4151cb5-63cb-45ad-b192-cc67dbea2add-kube-api-access-lt96h\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915879 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-bound-sa-token\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915900 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-service-ca-bundle\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915922 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-serving-cert\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.915967 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-registration-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916001 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8fb172-370c-4941-bf58-ce44f243ae48-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916060 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b3cdae8-1188-4f28-b434-80ce839aa050-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916144 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-csi-data-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916187 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/61b9b037-b82b-4c71-a535-aaae2a66dee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916208 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d0604709-7352-499d-b30c-69ece156a3db-signing-cabundle\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916231 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ww5r\" (UniqueName: \"kubernetes.io/projected/4b0cf0a0-036d-4e79-836e-208aa70df688-kube-api-access-6ww5r\") pod \"control-plane-machine-set-operator-78cbb6b69f-tl29t\" (UID: \"4b0cf0a0-036d-4e79-836e-208aa70df688\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916253 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxrck\" (UniqueName: \"kubernetes.io/projected/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-kube-api-access-rxrck\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916299 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916343 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916365 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pql5s\" (UniqueName: \"kubernetes.io/projected/61b9b037-b82b-4c71-a535-aaae2a66dee1-kube-api-access-pql5s\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916389 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6n7j\" (UniqueName: \"kubernetes.io/projected/60d041e7-e6d2-441c-8b41-e908873c0d41-kube-api-access-b6n7j\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916410 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38d4a7a2-512f-4077-8126-23fb08980b61-cert\") pod \"ingress-canary-k4gzk\" (UID: \"38d4a7a2-512f-4077-8126-23fb08980b61\") " pod="openshift-ingress-canary/ingress-canary-k4gzk"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916431 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916483 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-certificates\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916558 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfh6\" (UniqueName: \"kubernetes.io/projected/5cbe6fd6-8503-46d7-bf62-fa55adda1648-kube-api-access-jdfh6\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916598 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916646 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8h8v\" (UniqueName: \"kubernetes.io/projected/62b52056-10f9-4150-ae0f-b443b165074d-kube-api-access-c8h8v\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916705 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k6tl\" (UniqueName: \"kubernetes.io/projected/e44ff011-5019-4148-a16a-85c08b435a70-kube-api-access-8k6tl\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916726 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6sx\" (UniqueName: \"kubernetes.io/projected/d0604709-7352-499d-b30c-69ece156a3db-kube-api-access-9n6sx\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916747 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ksq\" (UniqueName: \"kubernetes.io/projected/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-kube-api-access-v2ksq\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916782 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t2q4\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-kube-api-access-2t2q4\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916829 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-node-bootstrap-token\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916851 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldm9\" (UniqueName: \"kubernetes.io/projected/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-kube-api-access-8ldm9\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916886 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d0604709-7352-499d-b30c-69ece156a3db-signing-key\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916910 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c49062f-5332-4b31-ac46-233bc7afd0c8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rh865\" (UID: \"5c49062f-5332-4b31-ac46-233bc7afd0c8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqfx\" (UniqueName: \"kubernetes.io/projected/5c49062f-5332-4b31-ac46-233bc7afd0c8-kube-api-access-wqqfx\") pod \"package-server-manager-789f6589d5-rh865\" (UID: \"5c49062f-5332-4b31-ac46-233bc7afd0c8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865"
Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916950 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\"
(UniqueName: \"kubernetes.io/configmap/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-config-volume\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.916998 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cnj5\" (UniqueName: \"kubernetes.io/projected/22e8702a-b556-4a2d-a3bc-d27bef18cfb2-kube-api-access-7cnj5\") pod \"migrator-59844c95c7-j2pbx\" (UID: \"22e8702a-b556-4a2d-a3bc-d27bef18cfb2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917076 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-trusted-ca\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917100 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmgwk\" (UniqueName: \"kubernetes.io/projected/38d4a7a2-512f-4077-8126-23fb08980b61-kube-api-access-pmgwk\") pod \"ingress-canary-k4gzk\" (UID: \"38d4a7a2-512f-4077-8126-23fb08980b61\") " pod="openshift-ingress-canary/ingress-canary-k4gzk" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917122 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3cdae8-1188-4f28-b434-80ce839aa050-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917161 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e44ff011-5019-4148-a16a-85c08b435a70-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917183 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e44ff011-5019-4148-a16a-85c08b435a70-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917205 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf8fb172-370c-4941-bf58-ce44f243ae48-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917289 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db301386-8de5-4553-a48d-fd858d4fc6f9-ca-trust-extracted\") 
pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917366 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-socket-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917420 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5cbe6fd6-8503-46d7-bf62-fa55adda1648-metrics-tls\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917447 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-tls\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917491 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b0cf0a0-036d-4e79-836e-208aa70df688-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tl29t\" (UID: \"4b0cf0a0-036d-4e79-836e-208aa70df688\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917510 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-config\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917526 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/61b9b037-b82b-4c71-a535-aaae2a66dee1-srv-cert\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" Nov 21 13:34:38 crc kubenswrapper[4904]: E1121 13:34:38.917563 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.417527025 +0000 UTC m=+153.539059577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917689 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjtxm\" (UniqueName: \"kubernetes.io/projected/d19a8a38-94ec-4377-8f90-24f34f5fb547-kube-api-access-vjtxm\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917730 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-plugins-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.917818 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19a8a38-94ec-4377-8f90-24f34f5fb547-secret-volume\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918431 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4151cb5-63cb-45ad-b192-cc67dbea2add-serving-cert\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918466 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19a8a38-94ec-4377-8f90-24f34f5fb547-config-volume\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918494 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-config\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918587 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cbe6fd6-8503-46d7-bf62-fa55adda1648-trusted-ca\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918784 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918836 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8fb172-370c-4941-bf58-ce44f243ae48-config\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918855 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918877 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cbe6fd6-8503-46d7-bf62-fa55adda1648-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.918898 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-certs\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.921010 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8fb172-370c-4941-bf58-ce44f243ae48-config\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8" Nov 21 13:34:38 crc kubenswrapper[4904]: E1121 13:34:38.920502 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.420485008 +0000 UTC m=+153.542017560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.922908 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3cdae8-1188-4f28-b434-80ce839aa050-config\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.927872 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-config\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.928714 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-trusted-ca\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.929640 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cbe6fd6-8503-46d7-bf62-fa55adda1648-trusted-ca\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.933128 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19a8a38-94ec-4377-8f90-24f34f5fb547-config-volume\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.937010 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.943035 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-service-ca-bundle\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.944362 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-certificates\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.944818 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db301386-8de5-4553-a48d-fd858d4fc6f9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.987503 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cfhwx"] Nov 21 13:34:38 crc kubenswrapper[4904]: I1121 13:34:38.987592 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx"] Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.008165 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19a8a38-94ec-4377-8f90-24f34f5fb547-secret-volume\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.008761 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3cdae8-1188-4f28-b434-80ce839aa050-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.009324 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e44ff011-5019-4148-a16a-85c08b435a70-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.010010 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5cbe6fd6-8503-46d7-bf62-fa55adda1648-metrics-tls\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.010598 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/61b9b037-b82b-4c71-a535-aaae2a66dee1-srv-cert\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.018478 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db301386-8de5-4553-a48d-fd858d4fc6f9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.019120 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8fb172-370c-4941-bf58-ce44f243ae48-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.022925 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4151cb5-63cb-45ad-b192-cc67dbea2add-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.024545 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b3cdae8-1188-4f28-b434-80ce839aa050-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-59smg\" (UID: \"6b3cdae8-1188-4f28-b434-80ce839aa050\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.026551 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-tls\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.028063 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4151cb5-63cb-45ad-b192-cc67dbea2add-serving-cert\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.029125 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e44ff011-5019-4148-a16a-85c08b435a70-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.029968 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cbe6fd6-8503-46d7-bf62-fa55adda1648-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.031252 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.031324 4904 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.531304303 +0000 UTC m=+153.652836855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032079 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-plugins-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032125 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-config\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032241 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-certs\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032270 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-mountpoint-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032313 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-metrics-tls\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032359 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-serving-cert\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032388 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-registration-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032425 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-csi-data-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032462 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d0604709-7352-499d-b30c-69ece156a3db-signing-cabundle\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032481 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ww5r\" (UniqueName: \"kubernetes.io/projected/4b0cf0a0-036d-4e79-836e-208aa70df688-kube-api-access-6ww5r\") pod \"control-plane-machine-set-operator-78cbb6b69f-tl29t\" (UID: \"4b0cf0a0-036d-4e79-836e-208aa70df688\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032500 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxrck\" (UniqueName: \"kubernetes.io/projected/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-kube-api-access-rxrck\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032526 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032555 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6n7j\" (UniqueName: \"kubernetes.io/projected/60d041e7-e6d2-441c-8b41-e908873c0d41-kube-api-access-b6n7j\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032593 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38d4a7a2-512f-4077-8126-23fb08980b61-cert\") pod \"ingress-canary-k4gzk\" (UID: \"38d4a7a2-512f-4077-8126-23fb08980b61\") " pod="openshift-ingress-canary/ingress-canary-k4gzk" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032639 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032746 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8h8v\" (UniqueName: \"kubernetes.io/projected/62b52056-10f9-4150-ae0f-b443b165074d-kube-api-access-c8h8v\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032786 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6sx\" (UniqueName: \"kubernetes.io/projected/d0604709-7352-499d-b30c-69ece156a3db-kube-api-access-9n6sx\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032803 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2ksq\" (UniqueName: \"kubernetes.io/projected/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-kube-api-access-v2ksq\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032838 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-node-bootstrap-token\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032858 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldm9\" (UniqueName: \"kubernetes.io/projected/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-kube-api-access-8ldm9\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032879 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d0604709-7352-499d-b30c-69ece156a3db-signing-key\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032910 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c49062f-5332-4b31-ac46-233bc7afd0c8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rh865\" (UID: \"5c49062f-5332-4b31-ac46-233bc7afd0c8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032933 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqfx\" (UniqueName: \"kubernetes.io/projected/5c49062f-5332-4b31-ac46-233bc7afd0c8-kube-api-access-wqqfx\") pod \"package-server-manager-789f6589d5-rh865\" (UID: 
\"5c49062f-5332-4b31-ac46-233bc7afd0c8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032953 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-config-volume\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.032962 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.033104 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmgwk\" (UniqueName: \"kubernetes.io/projected/38d4a7a2-512f-4077-8126-23fb08980b61-kube-api-access-pmgwk\") pod \"ingress-canary-k4gzk\" (UID: \"38d4a7a2-512f-4077-8126-23fb08980b61\") " pod="openshift-ingress-canary/ingress-canary-k4gzk" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.033224 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-socket-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.033277 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b0cf0a0-036d-4e79-836e-208aa70df688-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tl29t\" (UID: \"4b0cf0a0-036d-4e79-836e-208aa70df688\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.033583 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-config-volume\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.035938 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t2q4\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-kube-api-access-2t2q4\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.038849 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-plugins-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.038913 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-mountpoint-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.039585 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-config\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.040176 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-csi-data-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.040421 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-registration-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.041122 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.541097655 +0000 UTC m=+153.662630227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.042205 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d0604709-7352-499d-b30c-69ece156a3db-signing-cabundle\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.044404 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/62b52056-10f9-4150-ae0f-b443b165074d-socket-dir\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.044534 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-certs\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.053339 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-node-bootstrap-token\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.053881 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-metrics-tls\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.054073 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/61b9b037-b82b-4c71-a535-aaae2a66dee1-profile-collector-cert\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.054739 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c49062f-5332-4b31-ac46-233bc7afd0c8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rh865\" (UID: \"5c49062f-5332-4b31-ac46-233bc7afd0c8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.055637 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.058199 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b0cf0a0-036d-4e79-836e-208aa70df688-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-tl29t\" (UID: \"4b0cf0a0-036d-4e79-836e-208aa70df688\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.059497 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fb4wt"] Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.060451 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.062780 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k6tl\" (UniqueName: \"kubernetes.io/projected/e44ff011-5019-4148-a16a-85c08b435a70-kube-api-access-8k6tl\") pod \"kube-storage-version-migrator-operator-b67b599dd-vgbh2\" (UID: \"e44ff011-5019-4148-a16a-85c08b435a70\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 
13:34:39.069056 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8z9v\" (UniqueName: \"kubernetes.io/projected/eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33-kube-api-access-p8z9v\") pod \"cluster-image-registry-operator-dc59b4c8b-xc7j6\" (UID: \"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.100350 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-serving-cert\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.100977 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d0604709-7352-499d-b30c-69ece156a3db-signing-key\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.101206 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.101336 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/38d4a7a2-512f-4077-8126-23fb08980b61-cert\") pod \"ingress-canary-k4gzk\" (UID: \"38d4a7a2-512f-4077-8126-23fb08980b61\") " pod="openshift-ingress-canary/ingress-canary-k4gzk"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.106450 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt96h\" (UniqueName: \"kubernetes.io/projected/d4151cb5-63cb-45ad-b192-cc67dbea2add-kube-api-access-lt96h\") pod \"authentication-operator-69f744f599-n54bt\" (UID: \"d4151cb5-63cb-45ad-b192-cc67dbea2add\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.106556 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdh9s"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.109202 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.119776 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wwmcx"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.120741 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-84l77"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.133818 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.134335 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.634318954 +0000 UTC m=+153.755851506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.144268 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-bound-sa-token\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.150079 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cnj5\" (UniqueName: \"kubernetes.io/projected/22e8702a-b556-4a2d-a3bc-d27bef18cfb2-kube-api-access-7cnj5\") pod \"migrator-59844c95c7-j2pbx\" (UID: \"22e8702a-b556-4a2d-a3bc-d27bef18cfb2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.153524 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjtxm\" (UniqueName: \"kubernetes.io/projected/d19a8a38-94ec-4377-8f90-24f34f5fb547-kube-api-access-vjtxm\") pod \"collect-profiles-29395530-mwq7r\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.165332 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfh6\" (UniqueName: \"kubernetes.io/projected/5cbe6fd6-8503-46d7-bf62-fa55adda1648-kube-api-access-jdfh6\") pod \"ingress-operator-5b745b69d9-tfgxk\" (UID: \"5cbe6fd6-8503-46d7-bf62-fa55adda1648\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.171521 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.176967 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.182112 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf8fb172-370c-4941-bf58-ce44f243ae48-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-rnsg8\" (UID: \"bf8fb172-370c-4941-bf58-ce44f243ae48\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.185044 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.211679 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.222871 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.226118 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pql5s\" (UniqueName: \"kubernetes.io/projected/61b9b037-b82b-4c71-a535-aaae2a66dee1-kube-api-access-pql5s\") pod \"catalog-operator-68c6474976-nvtq4\" (UID: \"61b9b037-b82b-4c71-a535-aaae2a66dee1\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.235305 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.235858 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.735834648 +0000 UTC m=+153.857367380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.247542 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxrck\" (UniqueName: \"kubernetes.io/projected/a3a0f1f8-7ed7-451e-a940-6de0c3e53829-kube-api-access-rxrck\") pod \"dns-default-wxjkg\" (UID: \"a3a0f1f8-7ed7-451e-a940-6de0c3e53829\") " pod="openshift-dns/dns-default-wxjkg"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.247980 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.255126 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6sx\" (UniqueName: \"kubernetes.io/projected/d0604709-7352-499d-b30c-69ece156a3db-kube-api-access-9n6sx\") pod \"service-ca-9c57cc56f-psv9p\" (UID: \"d0604709-7352-499d-b30c-69ece156a3db\") " pod="openshift-service-ca/service-ca-9c57cc56f-psv9p"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.265626 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.268549 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.282145 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2ksq\" (UniqueName: \"kubernetes.io/projected/0cc57061-f6f9-4f93-bca5-a0ff5276ac0f-kube-api-access-v2ksq\") pod \"service-ca-operator-777779d784-w4c85\" (UID: \"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.302573 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldm9\" (UniqueName: \"kubernetes.io/projected/bb59cb0f-f03a-4d3d-b507-a40be22a4af3-kube-api-access-8ldm9\") pod \"machine-config-server-dtsx4\" (UID: \"bb59cb0f-f03a-4d3d-b507-a40be22a4af3\") " pod="openshift-machine-config-operator/machine-config-server-dtsx4"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.310711 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.320172 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8h8v\" (UniqueName: \"kubernetes.io/projected/62b52056-10f9-4150-ae0f-b443b165074d-kube-api-access-c8h8v\") pod \"csi-hostpathplugin-twpnr\" (UID: \"62b52056-10f9-4150-ae0f-b443b165074d\") " pod="hostpath-provisioner/csi-hostpathplugin-twpnr"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.329262 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.334251 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6n7j\" (UniqueName: \"kubernetes.io/projected/60d041e7-e6d2-441c-8b41-e908873c0d41-kube-api-access-b6n7j\") pod \"marketplace-operator-79b997595-bff4w\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " pod="openshift-marketplace/marketplace-operator-79b997595-bff4w"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.336721 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.337141 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.837102886 +0000 UTC m=+153.958635438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.338099 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.338810 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.838791038 +0000 UTC m=+153.960323600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.351583 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-psv9p"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.367531 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.377206 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.377626 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqfx\" (UniqueName: \"kubernetes.io/projected/5c49062f-5332-4b31-ac46-233bc7afd0c8-kube-api-access-wqqfx\") pod \"package-server-manager-789f6589d5-rh865\" (UID: \"5c49062f-5332-4b31-ac46-233bc7afd0c8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.402210 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" event={"ID":"11648315-9879-46e9-9aa1-78340652ebfd","Type":"ContainerStarted","Data":"671cb3877358369708764fd371f6aecb36951b71b68908837b3e161722ad9328"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.406272 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmgwk\" (UniqueName: \"kubernetes.io/projected/38d4a7a2-512f-4077-8126-23fb08980b61-kube-api-access-pmgwk\") pod \"ingress-canary-k4gzk\" (UID: \"38d4a7a2-512f-4077-8126-23fb08980b61\") " pod="openshift-ingress-canary/ingress-canary-k4gzk"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.410556 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ww5r\" (UniqueName: \"kubernetes.io/projected/4b0cf0a0-036d-4e79-836e-208aa70df688-kube-api-access-6ww5r\") pod \"control-plane-machine-set-operator-78cbb6b69f-tl29t\" (UID: \"4b0cf0a0-036d-4e79-836e-208aa70df688\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.415863 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dtsx4"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.440368 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.440883 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:39.940846506 +0000 UTC m=+154.062379058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.442817 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cfhwx" event={"ID":"8c6f1a5d-0119-4508-a7cd-5cb0eb522d01","Type":"ContainerStarted","Data":"c78032439dab56046b9ae7c653f2947f684b4e5cc8e26f18a5bdc65979603ba4"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.442992 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-twpnr"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.446450 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wxjkg"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.461558 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mg9wr"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.463233 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.464837 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.506168 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" event={"ID":"149c997e-8119-469c-afd2-4b0bc403e07a","Type":"ContainerStarted","Data":"b8eb0ee0c3fcecae248369620ee452c2a5ed309376bc9e3cdad823811a30de81"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.519225 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.532391 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.544589 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" event={"ID":"d8060832-7d55-42ed-8499-b42b0b3fdc8b","Type":"ContainerStarted","Data":"1fb4d5b8037bc54852ec1265f33004f582079af29b876ef3a65f2c90b22c9296"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.544634 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" event={"ID":"d8060832-7d55-42ed-8499-b42b0b3fdc8b","Type":"ContainerStarted","Data":"db2c2e9db1064dba470335d117f637826650a297148238f0c2a4bf521f508088"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.554623 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.555051 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.055035514 +0000 UTC m=+154.176568066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.571886 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-g69x5"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.598220 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" event={"ID":"45113c1a-4545-4ec0-a1f7-f387bf548d6f","Type":"ContainerStarted","Data":"c4e9acb40ccb409eaacde9b69767ff196ca82507ed58bbf0c558a4bdf290028d"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.598969 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.626598 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" event={"ID":"0d221639-40c0-4997-a42c-2e641f5793ab","Type":"ContainerStarted","Data":"b7e19a1ea6bae373024daed6026bf0d8337065bd5db6b1c2395ce8c6462786d8"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.653858 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.656506 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.661788 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.161736166 +0000 UTC m=+154.283268838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.665036 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.690978 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.693034 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5tm89" event={"ID":"fb945dc3-fcd0-43b6-a048-82682523848b","Type":"ContainerStarted","Data":"491157d371e3a2b8d20a0ab4378873659d6fb2c347f685e1427eae6b372f88d6"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.704308 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k4gzk"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.706852 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" event={"ID":"b29d96e3-7aa4-4626-a245-93ee36f7595f","Type":"ContainerStarted","Data":"e2faf24832821f2cf146abe3ce3d80de675012e14133380a31d222a381de7464"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.733784 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" event={"ID":"b4781bb1-96ff-48a7-8429-8831b12c9f3b","Type":"ContainerStarted","Data":"580cce232cdd120ab835fb3566f287eb076652a69aadd8ac356fa4fabed5e23d"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.755701 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz"]
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.758733 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.759068 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.259056297 +0000 UTC m=+154.380588849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.796044 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fb4wt" event={"ID":"cbdb2624-b774-4a00-980e-55364d66743e","Type":"ContainerStarted","Data":"88a9d86037745f383b6900df41fbd91ac19f50faf45b8724f6a5481a6133b800"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.797086 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fb4wt"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.799687 4904 patch_prober.go:28] interesting pod/console-operator-58897d9998-fb4wt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.799762 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fb4wt" podUID="cbdb2624-b774-4a00-980e-55364d66743e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.813332 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" event={"ID":"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8","Type":"ContainerStarted","Data":"284a814f6fe0339f1f75e8c23ee3a6eacc7527720287541eefe148940fc2eb8d"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.813385 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" event={"ID":"2b5a0949-d11a-49fb-b3ac-023c6ac6abd8","Type":"ContainerStarted","Data":"a47e3f238616273dcc96cb925b298ffc49a61814806dcada9f2c8cbac2afa1ab"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.861729 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.863284 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.363250827 +0000 UTC m=+154.484783389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.865912 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" event={"ID":"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90","Type":"ContainerStarted","Data":"f7b119180bd4e2dfff34d9af12cce20617c32f02d06fb374a60ad7a8d65898aa"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.881826 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" podStartSLOduration=127.881808336 podStartE2EDuration="2m7.881808336s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:39.881353055 +0000 UTC m=+154.002885607" watchObservedRunningTime="2025-11-21 13:34:39.881808336 +0000 UTC m=+154.003340888"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.884636 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" event={"ID":"cd7b70c9-69ff-4c9b-8ec9-da135b66a909","Type":"ContainerStarted","Data":"1ea6c410f84795920ff58c16aad1e23e677ee61125274494f8ab051fa5664df4"}
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.900500 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5tm89"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.902935 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.903012 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.919513 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.948252 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t"
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.964619 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:39 crc kubenswrapper[4904]: E1121 13:34:39.969034 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.469016517 +0000 UTC m=+154.590549069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:39 crc kubenswrapper[4904]: I1121 13:34:39.997362 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6"]
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.000521 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-n54bt"]
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.007867 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8"]
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.067165 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.067873 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.567826213 +0000 UTC m=+154.689358765 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.117898 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" podStartSLOduration=127.117875513 podStartE2EDuration="2m7.117875513s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:40.107544388 +0000 UTC m=+154.229076940" watchObservedRunningTime="2025-11-21 13:34:40.117875513 +0000 UTC m=+154.239408075"
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.173516 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.173874 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.67386115 +0000 UTC m=+154.795393702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.317897 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.318131 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.818070861 +0000 UTC m=+154.939603413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.320276 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.321107 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.821058775 +0000 UTC m=+154.942591327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.353204 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk"]
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.380719 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2"]
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.422371 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.423051 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:40.923030151 +0000 UTC m=+155.044562703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.552116 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.552867 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.052847216 +0000 UTC m=+155.174379768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.594130 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psv9p"]
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.654471 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"]
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.656066 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.656351 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.156313349 +0000 UTC m=+155.277845911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.672403 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx"]
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.757432 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.758271 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.258257013 +0000 UTC m=+155.379789565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.859305 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.859624 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.359608554 +0000 UTC m=+155.481141106 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.913171 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 13:34:40 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld
Nov 21 13:34:40 crc kubenswrapper[4904]: [+]process-running ok
Nov 21 13:34:40 crc kubenswrapper[4904]: healthz check failed
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.913205 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.921098 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" event={"ID":"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90","Type":"ContainerStarted","Data":"bdfaa2b070c75d4c025363e3a6cd51430d4d6c7fdf353b5d4daf387c810d99f5"}
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.924346 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.924443 4904 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gqj7v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body=
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.924473 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" podUID="a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.949554 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5tm89" podStartSLOduration=127.949537431 podStartE2EDuration="2m7.949537431s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:40.948200428 +0000 UTC m=+155.069732980" watchObservedRunningTime="2025-11-21 13:34:40.949537431 +0000 UTC m=+155.071069983"
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.960771 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:40 crc kubenswrapper[4904]: E1121 13:34:40.961074 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.461063036 +0000 UTC m=+155.582595588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.968342 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6" event={"ID":"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33","Type":"ContainerStarted","Data":"c178dcac0f082eab7dab58e567b8d891b10e6c9b67c7f38673677566e4dd3a38"}
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.970777 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" event={"ID":"40a7951a-a4f5-415c-8823-ca5d7428b792","Type":"ContainerStarted","Data":"007557614b35fbe17b2fd46d1342ba009d485e118c40000252fcf3600a15ceb4"}
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.976941 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" event={"ID":"6f672647-bac8-475b-879e-3f67cfe10017","Type":"ContainerStarted","Data":"19ca6c2c7bc96ba731084fcc2f8dc7fb012b9871bcfdfd50af79aa293d7087ed"}
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.982087 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g69x5" event={"ID":"dff8d615-db53-4198-adbd-b6f5fc1cd2be","Type":"ContainerStarted","Data":"dda35c1d9f542f037a3111dbbf291e6ea910f45902316083c1a3d985e4b160ff"}
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.991583 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" event={"ID":"431b2824-783e-4112-a269-dbb3514adc8b","Type":"ContainerStarted","Data":"119c9b348a55470d5c33f5b0c4360f3dce78f7896e098f70f143d7f28e93314d"}
Nov 21 13:34:40 crc kubenswrapper[4904]: I1121 13:34:40.993794 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" event={"ID":"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d","Type":"ContainerStarted","Data":"89e2de89ba62d8e437d8264e54535d88ed19c62888fd18b5b74d51e4ec12d65e"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.008411 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vc2gr" podStartSLOduration=129.008396109 podStartE2EDuration="2m9.008396109s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:40.981338349 +0000 UTC m=+155.102870901" watchObservedRunningTime="2025-11-21 13:34:41.008396109 +0000 UTC m=+155.129928661"
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.011783 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-twpnr"]
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.015579 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5tm89" event={"ID":"fb945dc3-fcd0-43b6-a048-82682523848b","Type":"ContainerStarted","Data":"314bdd73b5d9ee6db4cb3ae47f27c9d8617033c52cac2a493661488e6f34ae6e"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.036003 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" event={"ID":"5cbe6fd6-8503-46d7-bf62-fa55adda1648","Type":"ContainerStarted","Data":"8010441e60c4526653e1c97aa1997ae8ea1300cdb31e4875f49cd44f67c76d34"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.047088 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" event={"ID":"149c997e-8119-469c-afd2-4b0bc403e07a","Type":"ContainerStarted","Data":"159ee89abf5bb305c5999628483fb33e313c74af62e8530cdbf1ecf6b9386f88"}
Nov 21 13:34:41 crc kubenswrapper[4904]: W1121 13:34:41.060200 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd19a8a38_94ec_4377_8f90_24f34f5fb547.slice/crio-7ecbf7725fb10ddbfae3d010a6b75efe31a1ef2cb4cfc85f284dc9d11e2970d9 WatchSource:0}: Error finding container 7ecbf7725fb10ddbfae3d010a6b75efe31a1ef2cb4cfc85f284dc9d11e2970d9: Status 404 returned error can't find the container with id 7ecbf7725fb10ddbfae3d010a6b75efe31a1ef2cb4cfc85f284dc9d11e2970d9
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.061194 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fb4wt" podStartSLOduration=129.061173676 podStartE2EDuration="2m9.061173676s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:41.060052518 +0000 UTC m=+155.181585070" watchObservedRunningTime="2025-11-21 13:34:41.061173676 +0000 UTC m=+155.182706238"
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.061583 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.062142 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" podStartSLOduration=129.062135879 podStartE2EDuration="2m9.062135879s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:41.036265229 +0000 UTC m=+155.157797771" watchObservedRunningTime="2025-11-21 13:34:41.062135879 +0000 UTC m=+155.183668431"
Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.067840 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.567811681 +0000 UTC m=+155.689344233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.062640 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8" event={"ID":"bf8fb172-370c-4941-bf58-ce44f243ae48","Type":"ContainerStarted","Data":"49d10e98cb01807cbbf3962e8cd12e610e65766bf6d2f96a648f35fbbfd30ac9"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.095486 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865"]
Nov 21 13:34:41 crc kubenswrapper[4904]: W1121 13:34:41.124911 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb59cb0f_f03a_4d3d_b507_a40be22a4af3.slice/crio-3dc4d6c515fe40e12fec581d0b840339fef1461192ebe9c456708d4c51eef867 WatchSource:0}: Error finding container 3dc4d6c515fe40e12fec581d0b840339fef1461192ebe9c456708d4c51eef867: Status 404 returned error can't find the container with id 3dc4d6c515fe40e12fec581d0b840339fef1461192ebe9c456708d4c51eef867
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.133061 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fb4wt" event={"ID":"cbdb2624-b774-4a00-980e-55364d66743e","Type":"ContainerStarted","Data":"ce1e08c70bed13afa9971e34d1475692e998bd882bdd698cc9752139e642082c"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.144103 4904 patch_prober.go:28] interesting pod/console-operator-58897d9998-fb4wt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.144153 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fb4wt" podUID="cbdb2624-b774-4a00-980e-55364d66743e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.162453 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p6p2w" podStartSLOduration=129.162436954 podStartE2EDuration="2m9.162436954s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:41.161318816 +0000 UTC m=+155.282851358" watchObservedRunningTime="2025-11-21 13:34:41.162436954 +0000 UTC m=+155.283969506"
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.163834 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.164055 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.664043693 +0000 UTC m=+155.785576245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.167762 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" event={"ID":"3c586f4f-c286-4408-b2a1-6041ed6c435e","Type":"ContainerStarted","Data":"e44f076d864d7e8f24bba91ce9747fd7a893e0c7afe704374c3545057459edbc"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.220204 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" event={"ID":"0d221639-40c0-4997-a42c-2e641f5793ab","Type":"ContainerStarted","Data":"62d1b8f79ffb7fc2ea17257aaba70415413be45a13062cb4df1371a80181e4c0"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.261868 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" podStartSLOduration=128.261845016 podStartE2EDuration="2m8.261845016s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:41.240301732 +0000 UTC m=+155.361834294" watchObservedRunningTime="2025-11-21 13:34:41.261845016 +0000 UTC m=+155.383377568"
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.265626 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" event={"ID":"cd7b70c9-69ff-4c9b-8ec9-da135b66a909","Type":"ContainerStarted","Data":"2d727edbd1b5f4c5f0039050f14cd52796c15d4364d7fb0362ea690d2807eb62"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.289275 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.294059 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.794030723 +0000 UTC m=+155.915563275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.312772 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" podStartSLOduration=129.312748036 podStartE2EDuration="2m9.312748036s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:41.293195962 +0000 UTC m=+155.414728524" watchObservedRunningTime="2025-11-21 13:34:41.312748036 +0000 UTC m=+155.434280588"
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.320884 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" event={"ID":"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d","Type":"ContainerStarted","Data":"56ae18a62afee230734d14dc58255b22efda40a13015c9efc257232a2b25f02a"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.349290 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" event={"ID":"11648315-9879-46e9-9aa1-78340652ebfd","Type":"ContainerStarted","Data":"a4bf3240d1102168e65fd1548733e31d8cec3ffebf064710327415b777153edb"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.393794 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.394190 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.894176574 +0000 UTC m=+156.015709126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 21 13:34:41 crc kubenswrapper[4904]: W1121 13:34:41.446766 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62b52056_10f9_4150_ae0f_b443b165074d.slice/crio-38f6b58273521c1dabc9faf14f6eae406e40718d70f6312099871ece7f64b5be WatchSource:0}: Error finding container 38f6b58273521c1dabc9faf14f6eae406e40718d70f6312099871ece7f64b5be: Status 404 returned error can't find the container with id 38f6b58273521c1dabc9faf14f6eae406e40718d70f6312099871ece7f64b5be
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.457816 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" event={"ID":"d4151cb5-63cb-45ad-b192-cc67dbea2add","Type":"ContainerStarted","Data":"e70b822b0556253ca6acc19bfedd2fbb7abd6ef9dc10a2b350f5eea3f71fa495"}
Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.495194 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.496725 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:41.996706663 +0000 UTC m=+156.118239215 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.544822 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cfhwx" event={"ID":"8c6f1a5d-0119-4508-a7cd-5cb0eb522d01","Type":"ContainerStarted","Data":"39012edb17268718d1614282c23312697e8407a71108e6e6138c1117022c88f1"} Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.548949 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cfhwx" Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.551697 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-cfhwx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.551734 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cfhwx" podUID="8c6f1a5d-0119-4508-a7cd-5cb0eb522d01" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.558040 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t"] Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.587849 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" event={"ID":"e44ff011-5019-4148-a16a-85c08b435a70","Type":"ContainerStarted","Data":"747a7a99b61e06b51d51ee321b774345a9cc9de974d571971281682b50a4513e"} Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.596801 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.597227 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:42.097210092 +0000 UTC m=+156.218742644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.653993 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" event={"ID":"b29d96e3-7aa4-4626-a245-93ee36f7595f","Type":"ContainerStarted","Data":"b649399e88ea861a3cf625d56ec420bc7e985482ce4b97b8b97ebdb4307e59f2"} Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.663883 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bff4w"] Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.728315 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.728918 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:42.228895643 +0000 UTC m=+156.350428185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.734301 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" event={"ID":"b94e951c-fc0f-4225-89c3-d902f69a9ed6","Type":"ContainerStarted","Data":"f3c0afce9a2518c9e82eaf5728636af48d40e4fe39e47825362a8c86ecc6585a"} Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.767069 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5b75z" Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.830738 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.831254 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-21 13:34:42.331238818 +0000 UTC m=+156.452771370 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.926999 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:41 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:41 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:41 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.927790 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.934206 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:41 crc kubenswrapper[4904]: E1121 13:34:41.936268 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:42.436239268 +0000 UTC m=+156.557771840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:41 crc kubenswrapper[4904]: I1121 13:34:41.996799 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k4gzk"] Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.014105 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-w4c85"] Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.022907 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wxjkg"] Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.022959 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg"] Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.038602 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.038969 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:42.538955192 +0000 UTC m=+156.660487744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: W1121 13:34:42.071354 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b3cdae8_1188_4f28_b434_80ce839aa050.slice/crio-4125e296baa1b3124725b6c0264d0262a9d0b905a5e245f48dcd4b7e538db922 WatchSource:0}: Error finding container 4125e296baa1b3124725b6c0264d0262a9d0b905a5e245f48dcd4b7e538db922: Status 404 returned error can't find the container with id 4125e296baa1b3124725b6c0264d0262a9d0b905a5e245f48dcd4b7e538db922 Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.103956 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4"] Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.140617 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.144995 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:42.644955638 +0000 UTC m=+156.766488190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.230241 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6rfll" podStartSLOduration=129.230212319 podStartE2EDuration="2m9.230212319s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:42.16844599 +0000 UTC m=+156.289978542" watchObservedRunningTime="2025-11-21 13:34:42.230212319 +0000 UTC m=+156.351744871" Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.249449 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.249926 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:42.749911497 +0000 UTC m=+156.871444049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.263041 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cfhwx" podStartSLOduration=129.263014051 podStartE2EDuration="2m9.263014051s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:42.261911644 +0000 UTC m=+156.383444206" watchObservedRunningTime="2025-11-21 13:34:42.263014051 +0000 UTC m=+156.384546613" Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.350574 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.351201 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:42.851176125 +0000 UTC m=+156.972708677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.453200 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.453920 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:42.953894648 +0000 UTC m=+157.075427190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.555360 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.555924 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.055902155 +0000 UTC m=+157.177434707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.657254 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.657919 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.157901191 +0000 UTC m=+157.279433743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.718416 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.718490 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.761356 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.761596 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.261571709 +0000 UTC m=+157.383104261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.761749 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.762096 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.262084112 +0000 UTC m=+157.383616664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.763279 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wxjkg" event={"ID":"a3a0f1f8-7ed7-451e-a940-6de0c3e53829","Type":"ContainerStarted","Data":"07250da2eabbc5c54eefa43bb22aa358cbe362ee6e34372072fbc4bef4d97fc6"} Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.818964 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" event={"ID":"6599d2b5-6e0e-4d90-a7ee-f4b804ee185d","Type":"ContainerStarted","Data":"36183c5a0f0690dc6c6c22e5e1d69929a2990a3a31b7e01af7152c243d6c10c2"} Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.842748 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" event={"ID":"6f672647-bac8-475b-879e-3f67cfe10017","Type":"ContainerStarted","Data":"2c3184593a854c6cb97978245f9d8965d68e36b17c94563a465bcc21971c4878"} Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.851479 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tfzhv" podStartSLOduration=129.851465125 podStartE2EDuration="2m9.851465125s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:42.844114693 +0000 UTC m=+156.965647245" watchObservedRunningTime="2025-11-21 13:34:42.851465125 +0000 UTC m=+156.972997677" Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.853637 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k4gzk" event={"ID":"38d4a7a2-512f-4077-8126-23fb08980b61","Type":"ContainerStarted","Data":"98ba269e53a0d1f3c96b672cb1fd5b16bf4a1ccfd6cb5190bb15d0f3f8857b22"} Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.862917 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.863508 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.363476403 +0000 UTC m=+157.485009135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.870082 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" event={"ID":"60d041e7-e6d2-441c-8b41-e908873c0d41","Type":"ContainerStarted","Data":"b7843149988f4def1c3052aaabcd8a8477dce10028c8e80e3ce94a682f256b32"} Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.887955 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twpnr" event={"ID":"62b52056-10f9-4150-ae0f-b443b165074d","Type":"ContainerStarted","Data":"38f6b58273521c1dabc9faf14f6eae406e40718d70f6312099871ece7f64b5be"} Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.906865 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:42 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:42 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:42 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.906960 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.953024 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" event={"ID":"431b2824-783e-4112-a269-dbb3514adc8b","Type":"ContainerStarted","Data":"f0cbef9bb55db4a40a43f06b56e9444b7089f0e74c4f89d1d4bba28306b7c2a0"} Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.955633 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.958309 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg" event={"ID":"6b3cdae8-1188-4f28-b434-80ce839aa050","Type":"ContainerStarted","Data":"4125e296baa1b3124725b6c0264d0262a9d0b905a5e245f48dcd4b7e538db922"} Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.966307 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:42 crc kubenswrapper[4904]: E1121 13:34:42.969094 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-21 13:34:43.469072518 +0000 UTC m=+157.590605070 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.970890 4904 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qvbx4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 21 13:34:42 crc kubenswrapper[4904]: I1121 13:34:42.970976 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" podUID="431b2824-783e-4112-a269-dbb3514adc8b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.006160 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" event={"ID":"b29d96e3-7aa4-4626-a245-93ee36f7595f","Type":"ContainerStarted","Data":"5deb51d59ceae7bfaa7f70632321c3af45a32d07319f1d5b0dbc1eb9e4cf5941"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.021180 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" podStartSLOduration=130.021154939 podStartE2EDuration="2m10.021154939s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.021118038 +0000 UTC m=+157.142650590" watchObservedRunningTime="2025-11-21 13:34:43.021154939 +0000 UTC m=+157.142687491" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.075689 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-psv9p" event={"ID":"d0604709-7352-499d-b30c-69ece156a3db","Type":"ContainerStarted","Data":"a0669428d877f59a72c3547f2d7b5df9add8f90e8309b9c93107207eed60f141"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.075743 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-psv9p" event={"ID":"d0604709-7352-499d-b30c-69ece156a3db","Type":"ContainerStarted","Data":"6bf662678dbe251b746f6687fd45f206c99b27bb2b0e67c3042c979e0a3fe96b"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.076686 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.077048 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.577006642 +0000 UTC m=+157.698539194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.080681 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" event={"ID":"d4151cb5-63cb-45ad-b192-cc67dbea2add","Type":"ContainerStarted","Data":"2804dc7f236f47cea6d5c9400151fef3e3f7b8e7f40a59e80b06ac487c745de5"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.083634 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" event={"ID":"40a7951a-a4f5-415c-8823-ca5d7428b792","Type":"ContainerStarted","Data":"4328d2acb3921327b088baf3b4381c55c5e2ec005ce60f61d0387a30b6df751c"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.084507 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.100364 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dtsx4" event={"ID":"bb59cb0f-f03a-4d3d-b507-a40be22a4af3","Type":"ContainerStarted","Data":"73babf3081d5c451ccecb6e52c5ba38028244493c1ef2f552127df754411ff7d"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.100429 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dtsx4" event={"ID":"bb59cb0f-f03a-4d3d-b507-a40be22a4af3","Type":"ContainerStarted","Data":"3dc4d6c515fe40e12fec581d0b840339fef1461192ebe9c456708d4c51eef867"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.130851 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" event={"ID":"cd7b70c9-69ff-4c9b-8ec9-da135b66a909","Type":"ContainerStarted","Data":"baab7b461e4d33378bc242d2662853554c3c1be16bad3dbc8652939032dd2957"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.157156 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" event={"ID":"b4781bb1-96ff-48a7-8429-8831b12c9f3b","Type":"ContainerStarted","Data":"cf58137ae5997a362cd0a3bbf87311f3d7e2128bf217736cc99575c62fc96cdb"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.167268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" event={"ID":"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f","Type":"ContainerStarted","Data":"69ddd649f66a15fccbb7524a8abc454e23eab5a67ae736f990532ddf3e921378"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.176411 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" 
event={"ID":"0d221639-40c0-4997-a42c-2e641f5793ab","Type":"ContainerStarted","Data":"74d42e02c2447132ec9a39c1296ef5216116d64f9f490fcf5b5e02797f7a5172"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.179292 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.180329 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.68031125 +0000 UTC m=+157.801843802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.210823 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" event={"ID":"e44ff011-5019-4148-a16a-85c08b435a70","Type":"ContainerStarted","Data":"4e68e3f85de8c3e6d5530463d33abc7f4cdb1cac2263ad8cebc373770857a764"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.233846 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t" event={"ID":"4b0cf0a0-036d-4e79-836e-208aa70df688","Type":"ContainerStarted","Data":"f933f6a93e66044d680db16bfb7a84b657d0d9cba59640a5851be902c61b415c"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.277823 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdh9s" podStartSLOduration=130.277781834 podStartE2EDuration="2m10.277781834s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.151010194 +0000 UTC m=+157.272542746" watchObservedRunningTime="2025-11-21 13:34:43.277781834 +0000 UTC m=+157.399314386" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.278785 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" podStartSLOduration=130.278778549 podStartE2EDuration="2m10.278778549s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.274252036 +0000 UTC m=+157.395784578" watchObservedRunningTime="2025-11-21 13:34:43.278778549 +0000 UTC m=+157.400311111" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.280956 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.296971 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.796831086 +0000 UTC m=+157.918363638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.310154 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6" event={"ID":"eea1fa70-b4b7-4ed1-9f0c-60bfd829bf33","Type":"ContainerStarted","Data":"cef9fde38a73f298ddaf9df3c4abf1f4c16db957468ca829edf301d5c151d3af"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.354202 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" event={"ID":"5c49062f-5332-4b31-ac46-233bc7afd0c8","Type":"ContainerStarted","Data":"e4bbd1e445bc8eed564b235d1ac5d522719edc708b7dab7ef3443661a0790327"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.360452 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dtsx4" podStartSLOduration=7.360417621 podStartE2EDuration="7.360417621s" podCreationTimestamp="2025-11-21 13:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.34669219 +0000 UTC m=+157.468224742" watchObservedRunningTime="2025-11-21 13:34:43.360417621 +0000 UTC m=+157.481950173" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.383514 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.385601 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.885585794 +0000 UTC m=+158.007118346 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.452496 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" event={"ID":"b94e951c-fc0f-4225-89c3-d902f69a9ed6","Type":"ContainerStarted","Data":"44cd187e1506f005589aa830e09a5171b7f5892279e3752f0bbaddd639a58357"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.488928 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.488959 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-tqksx" podStartSLOduration=131.488929684 podStartE2EDuration="2m11.488929684s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.408557792 +0000 UTC m=+157.530090334" watchObservedRunningTime="2025-11-21 13:34:43.488929684 +0000 UTC m=+157.610462236" Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.490096 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:43.990079852 +0000 UTC m=+158.111612404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.503825 4904 generic.go:334] "Generic (PLEG): container finished" podID="3c586f4f-c286-4408-b2a1-6041ed6c435e" containerID="6c8330496652cd2b513e02b60c892ae3ec15fbaf7dd953f1d40135fb3fa96603" exitCode=0 Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.503905 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" event={"ID":"3c586f4f-c286-4408-b2a1-6041ed6c435e","Type":"ContainerDied","Data":"6c8330496652cd2b513e02b60c892ae3ec15fbaf7dd953f1d40135fb3fa96603"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.539110 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" event={"ID":"5cbe6fd6-8503-46d7-bf62-fa55adda1648","Type":"ContainerStarted","Data":"f6ee3e17f1c1f85560bbbab804fccc770b0b360b7c1f4b0661a6017138f62e55"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.559417 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" event={"ID":"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d","Type":"ContainerStarted","Data":"4c7fcece01c87b43422ae219d321e8a619d5fa03cf52ebb0dfc369c86471b45a"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.559487 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" event={"ID":"b5b8159d-494a-4eb8-bdf9-6facb27ecf0d","Type":"ContainerStarted","Data":"692a7b8175c622943ce37fa1d77f69b401daa323cf094a2341ae0a7e444f93ea"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.584055 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-psv9p" podStartSLOduration=130.584027659 podStartE2EDuration="2m10.584027659s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.502399887 +0000 UTC m=+157.623932439" watchObservedRunningTime="2025-11-21 13:34:43.584027659 +0000 UTC m=+157.705560211" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.592687 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.593338 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.093311538 +0000 UTC m=+158.214844090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.595263 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-n54bt" podStartSLOduration=131.595231757 podStartE2EDuration="2m11.595231757s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.582391388 +0000 UTC m=+157.703923940" watchObservedRunningTime="2025-11-21 13:34:43.595231757 +0000 UTC m=+157.716764319" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.599194 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" event={"ID":"d19a8a38-94ec-4377-8f90-24f34f5fb547","Type":"ContainerStarted","Data":"acf9d917cece611e3714a2947789ed917d1cab98addad2c33f86b497f248379b"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.599260 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" event={"ID":"d19a8a38-94ec-4377-8f90-24f34f5fb547","Type":"ContainerStarted","Data":"7ecbf7725fb10ddbfae3d010a6b75efe31a1ef2cb4cfc85f284dc9d11e2970d9"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.625026 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xc7j6" podStartSLOduration=130.625008584 podStartE2EDuration="2m10.625008584s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.623972378 +0000 UTC m=+157.745504930" watchObservedRunningTime="2025-11-21 13:34:43.625008584 +0000 UTC m=+157.746541136" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.645128 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx" event={"ID":"22e8702a-b556-4a2d-a3bc-d27bef18cfb2","Type":"ContainerStarted","Data":"ab6378749a9aab5c94bf74341476b96fc78439cd17e0d03584f8ec5b6c2f9f49"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.698734 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g69x5" event={"ID":"dff8d615-db53-4198-adbd-b6f5fc1cd2be","Type":"ContainerStarted","Data":"0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.700400 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.700858 4904 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.200821501 +0000 UTC m=+158.322354173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.701260 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.701549 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.20154202 +0000 UTC m=+158.323074572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.759194 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" podStartSLOduration=130.759167437 podStartE2EDuration="2m10.759167437s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.752158083 +0000 UTC m=+157.873690645" watchObservedRunningTime="2025-11-21 13:34:43.759167437 +0000 UTC m=+157.880699989" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.786440 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" event={"ID":"61b9b037-b82b-4c71-a535-aaae2a66dee1","Type":"ContainerStarted","Data":"a2553b4905c09a29e400ccdb2ed91ab060a612e451d40154763fbf718268cd15"} Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.786480 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.794764 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-cfhwx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.794823 4904 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cfhwx" podUID="8c6f1a5d-0119-4508-a7cd-5cb0eb522d01" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.795347 4904 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nvtq4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.795396 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" podUID="61b9b037-b82b-4c71-a535-aaae2a66dee1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.800498 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-84l77" podStartSLOduration=130.800459269 podStartE2EDuration="2m10.800459269s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.794705066 +0000 UTC m=+157.916237638" watchObservedRunningTime="2025-11-21 13:34:43.800459269 +0000 UTC m=+157.921991821" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.802315 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.802546 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.302523761 +0000 UTC m=+158.424056313 (durationBeforeRetry 500ms). 
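[Note] The readiness failures for downloads-7954f5f757-cfhwx and catalog-operator-68c6474976-nvtq4 above are ordinary startup noise: kubelet dials the pod IP before the server inside has bound its port, the TCP connect is refused, and the attempt is logged as a probe failure (readiness only gates service endpoints, so nothing is restarted). A sketch of what an HTTP readiness check reduces to; probeOnce is a hypothetical helper, and kubelet's actual rule is that any transport error or a status outside 200-399 counts as failure:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP readiness check: transport errors and
// non-2xx/3xx statuses are both reported as probe failures.
func probeOnce(url string) error {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. "connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// 10.217.0.18:8080 is the pod IP from the log; it is unreachable
	// outside that cluster, so this call fails the same way the probe does.
	if err := probeOnce("http://10.217.0.18:8080/"); err != nil {
		fmt.Println(err)
	}
}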
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.802774 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.809949 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.309933464 +0000 UTC m=+158.431466016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.818445 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.826818 4904 patch_prober.go:28] interesting pod/apiserver-76f77b778f-9sdsl container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]log ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]etcd ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/generic-apiserver-start-informers ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/max-in-flight-filter ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 21 13:34:43 crc kubenswrapper[4904]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 21 13:34:43 crc kubenswrapper[4904]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/project.openshift.io-projectcache ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/openshift.io-startinformers ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 21 13:34:43 crc kubenswrapper[4904]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 
21 13:34:43 crc kubenswrapper[4904]: livez check failed Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.826942 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" podUID="149c997e-8119-469c-afd2-4b0bc403e07a" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.828278 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vgbh2" podStartSLOduration=130.828232877 podStartE2EDuration="2m10.828232877s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.827495849 +0000 UTC m=+157.949028431" watchObservedRunningTime="2025-11-21 13:34:43.828232877 +0000 UTC m=+157.949765429" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.839565 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fb4wt" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.906770 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" podStartSLOduration=130.906755271 podStartE2EDuration="2m10.906755271s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.905329576 +0000 UTC m=+158.026862138" watchObservedRunningTime="2025-11-21 13:34:43.906755271 +0000 UTC m=+158.028287823" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.924468 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:43 crc kubenswrapper[4904]: E1121 13:34:43.925646 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.425626599 +0000 UTC m=+158.547159151 (durationBeforeRetry 500ms). 
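[Note] The multi-line startup-probe output from apiserver-76f77b778f-9sdsl above follows the kube-apiserver healthz convention: each registered check renders as "[+]name ok" or "[-]name failed: reason withheld" (reasons are hidden from unauthenticated callers), and the endpoint returns HTTP 500 until every check passes; here the two bootstrap-authorization poststarthooks are still pending. A compact sketch of that aggregation, with a hypothetical healthzReport helper and illustrative check names:

package main

import (
	"fmt"
	"sort"
)

// healthzReport aggregates named checks the way the apiserver's health
// endpoints render them: all checks must pass for an HTTP 200.
func healthzReport(checks map[string]error) (int, string) {
	names := make([]string, 0, len(checks))
	for n := range checks {
		names = append(names, n)
	}
	sort.Strings(names) // deterministic output order
	body := ""
	failed := false
	for _, n := range names {
		if checks[n] != nil {
			failed = true
			body += fmt.Sprintf("[-]%s failed: reason withheld\n", n)
		} else {
			body += fmt.Sprintf("[+]%s ok\n", n)
		}
	}
	if failed {
		return 500, body + "livez check failed\n"
	}
	return 200, body + "ok\n"
}

func main() {
	status, body := healthzReport(map[string]error{
		"ping": nil,
		"etcd": nil,
		"poststarthook/authorization.openshift.io-bootstrapclusterroles": fmt.Errorf("not finished"),
	})
	fmt.Println(status) // 500 until the pending hook completes
	fmt.Print(body)
}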
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.929569 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:43 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:43 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:43 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.929701 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:43 crc kubenswrapper[4904]: I1121 13:34:43.959937 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t" podStartSLOduration=130.959917738 podStartE2EDuration="2m10.959917738s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:43.958146835 +0000 UTC m=+158.079679407" watchObservedRunningTime="2025-11-21 13:34:43.959917738 +0000 UTC m=+158.081450290" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.030022 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.030477 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.530458735 +0000 UTC m=+158.651991287 (durationBeforeRetry 500ms). 
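[Note] The pod_startup_latency_tracker entries measure wall time from podCreationTimestamp to the observed running time; when firstStartedPulling and lastFinishedPulling are the zero time ("0001-01-01 ..."), no image-pull interval is subtracted and podStartSLOduration equals podStartE2EDuration, which is why every entry here shows matching values. The fractional seconds line up with watchObservedRunningTime, e.g. for control-plane-machine-set-operator: 13:34:43.959917738 minus 13:32:33 is 2m10.959917738s. A quick reproduction of that arithmetic with the timestamps copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's time.Time.String() form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-21 13:32:33 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-11-21 13:34:43.959917738 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// podStartSLOduration = watchObservedRunningTime - podCreationTimestamp
	fmt.Printf("%.9f\n", observed.Sub(created).Seconds()) // 130.959917738
}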
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.041149 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wwmcx" podStartSLOduration=131.041110459 podStartE2EDuration="2m11.041110459s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.004618085 +0000 UTC m=+158.126150657" watchObservedRunningTime="2025-11-21 13:34:44.041110459 +0000 UTC m=+158.162643011" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.084981 4904 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dzb8c container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.085091 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" podUID="40a7951a-a4f5-415c-8823-ca5d7428b792" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.090827 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-twwpr" podStartSLOduration=131.09080849 podStartE2EDuration="2m11.09080849s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.04477097 +0000 UTC m=+158.166303532" watchObservedRunningTime="2025-11-21 13:34:44.09080849 +0000 UTC m=+158.212341042" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.132839 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" podStartSLOduration=131.132815751 podStartE2EDuration="2m11.132815751s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.131819706 +0000 UTC m=+158.253352258" watchObservedRunningTime="2025-11-21 13:34:44.132815751 +0000 UTC m=+158.254348303" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.133837 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 
13:34:44.134483 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.634454031 +0000 UTC m=+158.755986583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.228187 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-g69x5" podStartSLOduration=131.228168502 podStartE2EDuration="2m11.228168502s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.175534088 +0000 UTC m=+158.297066640" watchObservedRunningTime="2025-11-21 13:34:44.228168502 +0000 UTC m=+158.349701054" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.250292 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.261996 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.761972959 +0000 UTC m=+158.883505511 (durationBeforeRetry 500ms). 
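[Note] The E-level nestedpendingoperations.go:348 lines are the volume manager's retry gate: each failed mount/unmount records a "no retries permitted until" deadline (failure time plus the 500ms durationBeforeRetry shown here), and the reconciler, which re-examines pending volumes on a short period (100ms by default), is refused until that deadline passes. That is why the same error pair repeats at roughly 100ms intervals with a sliding 500ms deadline. A stripped-down sketch of the gate, assuming a fixed backoff as observed in this log (the real code also grows the backoff exponentially on repeated failures):

package main

import (
	"fmt"
	"time"
)

// retryGate refuses an operation until its backoff deadline has passed,
// like nestedpendingoperations' "No retries permitted until ..." check.
type retryGate struct {
	notBefore map[string]time.Time
	backoff   time.Duration
}

func (g *retryGate) try(key string, op func() error, now time.Time) error {
	if deadline, ok := g.notBefore[key]; ok && now.Before(deadline) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			deadline.Format(time.RFC3339Nano), g.backoff)
	}
	if err := op(); err != nil {
		g.notBefore[key] = now.Add(g.backoff) // arm the gate on failure
		return err
	}
	delete(g.notBefore, key) // success clears the backoff
	return nil
}

func main() {
	g := &retryGate{notBefore: map[string]time.Time{}, backoff: 500 * time.Millisecond}
	fail := func() error { return fmt.Errorf("driver not registered") }
	now := time.Now()
	fmt.Println(g.try("pvc-657094db", fail, now))                           // fails, arms the gate
	fmt.Println(g.try("pvc-657094db", fail, now.Add(100*time.Millisecond))) // refused: still inside backoff
}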
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.264332 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" podStartSLOduration=132.264311808 podStartE2EDuration="2m12.264311808s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.263000455 +0000 UTC m=+158.384533007" watchObservedRunningTime="2025-11-21 13:34:44.264311808 +0000 UTC m=+158.385844360" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.352228 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.352822 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.852790529 +0000 UTC m=+158.974323081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.453669 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.453977 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:44.953955854 +0000 UTC m=+159.075488416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.555297 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.555742 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.055706084 +0000 UTC m=+159.177238636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.555848 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.556138 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.056130675 +0000 UTC m=+159.177663227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.657147 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.657346 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.15731642 +0000 UTC m=+159.278848982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.657460 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.657817 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.157803773 +0000 UTC m=+159.279336325 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.758910 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.759217 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.259162603 +0000 UTC m=+159.380695145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.759302 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.759875 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.25986063 +0000 UTC m=+159.381393182 (durationBeforeRetry 500ms). 
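[Note] A reading aid for the timestamps throughout this log: the "m=+158.214844090" suffix is Go's monotonic clock reading, which time.Time.String() appends as seconds since the process first sampled the clock (roughly kubelet start), so it doubles as process uptime and is immune to wall-clock adjustments. Any Go program shows the same suffix:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(500 * time.Millisecond)
	// String() appends the monotonic reading, e.g.
	// "2025-11-21 13:34:44.5 +0000 UTC m=+0.500123456"
	fmt.Println(time.Now())
	fmt.Println(time.Since(start)) // elapsed time computed from the monotonic clock
}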
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.792793 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx" event={"ID":"22e8702a-b556-4a2d-a3bc-d27bef18cfb2","Type":"ContainerStarted","Data":"4211956b83b99149c9839f015f56adb2d1433319bc93cff35d6f68a24d0154da"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.792872 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx" event={"ID":"22e8702a-b556-4a2d-a3bc-d27bef18cfb2","Type":"ContainerStarted","Data":"8956f8c5b7e524c24785712d0de0bfb44b8b0cd55fad9841f1b6418ece281c27"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.793980 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8" event={"ID":"bf8fb172-370c-4941-bf58-ce44f243ae48","Type":"ContainerStarted","Data":"e20adca02bc5653dbf86ed35f7491006fff74d5a4f23b3dda7eefd483bf54d11"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.795402 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" event={"ID":"5c49062f-5332-4b31-ac46-233bc7afd0c8","Type":"ContainerStarted","Data":"b8028b9e346f4af0e45be193a122217bfdd2214feeb95ba418e8d58a1dca0a3e"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.795463 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" event={"ID":"5c49062f-5332-4b31-ac46-233bc7afd0c8","Type":"ContainerStarted","Data":"a269461f54403fd654a838d6ca2d4f979039152d41498f05568da3d6f259ee7d"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.795516 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.797255 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wxjkg" event={"ID":"a3a0f1f8-7ed7-451e-a940-6de0c3e53829","Type":"ContainerStarted","Data":"e8b252a0d177da0555c1d13ffaf809b6898c55458e71b2bc7bf4c958ce7a559c"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.797320 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wxjkg" event={"ID":"a3a0f1f8-7ed7-451e-a940-6de0c3e53829","Type":"ContainerStarted","Data":"aba9e239d80b4fe936237e46d314ea195c6a8c19b7fb65a0205bad8b95e7bcb8"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.797386 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.799028 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" event={"ID":"3c586f4f-c286-4408-b2a1-6041ed6c435e","Type":"ContainerStarted","Data":"ce533d425f39a6b7cd05497d561d6884394ce31a781dcf91ed7f64fe4ee963a1"} Nov 21 13:34:44 crc 
kubenswrapper[4904]: I1121 13:34:44.800633 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" event={"ID":"b94e951c-fc0f-4225-89c3-d902f69a9ed6","Type":"ContainerStarted","Data":"f1441676792b440107001453b5580cd90c53e950d74561900ae52c2f7097f853"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.801825 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k4gzk" event={"ID":"38d4a7a2-512f-4077-8126-23fb08980b61","Type":"ContainerStarted","Data":"7a0f48bf4a81049b66036518ac2408f5ed187a8a139e710fc451f5a22c53102c"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.814985 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" event={"ID":"60d041e7-e6d2-441c-8b41-e908873c0d41","Type":"ContainerStarted","Data":"416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.815397 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.817215 4904 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bff4w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.817258 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" podUID="60d041e7-e6d2-441c-8b41-e908873c0d41" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.817975 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tfgxk" event={"ID":"5cbe6fd6-8503-46d7-bf62-fa55adda1648","Type":"ContainerStarted","Data":"08608b4bed10807b892709ff63d4d87d5114a16de6d11e6c38bd06cf523fc8b9"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.819681 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-tl29t" event={"ID":"4b0cf0a0-036d-4e79-836e-208aa70df688","Type":"ContainerStarted","Data":"6f29e4e49b165e257c61dac51a293b27c5a6833b5a8fc218d622e07505555ddf"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.821756 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-w4c85" event={"ID":"0cc57061-f6f9-4f93-bca5-a0ff5276ac0f","Type":"ContainerStarted","Data":"9baab8a0431dd430b2504dbe9bf05b6d0d281ba770f53071f1b960fef3aafd65"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.823250 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" event={"ID":"6f672647-bac8-475b-879e-3f67cfe10017","Type":"ContainerStarted","Data":"8a901acfd10743e738d8dd62ea534d3c788afa8d5a2c1068910e6cb3ca815448"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.824606 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j2pbx" podStartSLOduration=131.824592644 
podStartE2EDuration="2m11.824592644s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.821151658 +0000 UTC m=+158.942684210" watchObservedRunningTime="2025-11-21 13:34:44.824592644 +0000 UTC m=+158.946125196" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.826589 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twpnr" event={"ID":"62b52056-10f9-4150-ae0f-b443b165074d","Type":"ContainerStarted","Data":"865b32864b97b91c7428912472498278da932698704cd4c4459af1bb9d895c55"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.828050 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg" event={"ID":"6b3cdae8-1188-4f28-b434-80ce839aa050","Type":"ContainerStarted","Data":"5fb126bd16357dcf159a44f7e4f457f33707d206f2b640c71f11a5ad5770cf15"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.834140 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" event={"ID":"61b9b037-b82b-4c71-a535-aaae2a66dee1","Type":"ContainerStarted","Data":"e424f3c2c3f0ca855476d632b4013b1452daa9022a4f25aabfc077e9ad64a783"} Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.835941 4904 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nvtq4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.835991 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-cfhwx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.836003 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" podUID="61b9b037-b82b-4c71-a535-aaae2a66dee1" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.836019 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cfhwx" podUID="8c6f1a5d-0119-4508-a7cd-5cb0eb522d01" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.840794 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dzb8c" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.856745 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvbx4" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.860515 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.860707 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.360682007 +0000 UTC m=+159.482214559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.871338 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mg9wr" podStartSLOduration=131.871312861 podStartE2EDuration="2m11.871312861s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.864348348 +0000 UTC m=+158.985880900" watchObservedRunningTime="2025-11-21 13:34:44.871312861 +0000 UTC m=+158.992845413" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.874076 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.874607 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.374588871 +0000 UTC m=+159.496121423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.905888 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:44 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:44 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:44 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.905977 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.907711 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" podStartSLOduration=131.907689982 podStartE2EDuration="2m11.907689982s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.905192469 +0000 UTC m=+159.026725021" watchObservedRunningTime="2025-11-21 13:34:44.907689982 +0000 UTC m=+159.029222544" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.939466 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-rnsg8" podStartSLOduration=131.939438328 podStartE2EDuration="2m11.939438328s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.937829799 +0000 UTC m=+159.059362351" watchObservedRunningTime="2025-11-21 13:34:44.939438328 +0000 UTC m=+159.060970880" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.966969 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wxjkg" podStartSLOduration=8.966942619 podStartE2EDuration="8.966942619s" podCreationTimestamp="2025-11-21 13:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:44.966714463 +0000 UTC m=+159.088247015" watchObservedRunningTime="2025-11-21 13:34:44.966942619 +0000 UTC m=+159.088475171" Nov 21 13:34:44 crc kubenswrapper[4904]: I1121 13:34:44.984321 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:44 crc kubenswrapper[4904]: E1121 13:34:44.985377 4904 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.485343944 +0000 UTC m=+159.606876496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.008774 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865" podStartSLOduration=132.008743844 podStartE2EDuration="2m12.008743844s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:45.002434298 +0000 UTC m=+159.123966880" watchObservedRunningTime="2025-11-21 13:34:45.008743844 +0000 UTC m=+159.130276396" Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.057494 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" podStartSLOduration=132.057467481 podStartE2EDuration="2m12.057467481s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:45.034643376 +0000 UTC m=+159.156175938" watchObservedRunningTime="2025-11-21 13:34:45.057467481 +0000 UTC m=+159.179000033" Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.060500 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-k4gzk" podStartSLOduration=9.060488705 podStartE2EDuration="9.060488705s" podCreationTimestamp="2025-11-21 13:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:45.057968533 +0000 UTC m=+159.179501085" watchObservedRunningTime="2025-11-21 13:34:45.060488705 +0000 UTC m=+159.182021257" Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.086410 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.087126 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.587098444 +0000 UTC m=+159.708630996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.112535 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-59smg" podStartSLOduration=132.112519004 podStartE2EDuration="2m12.112519004s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:45.111605682 +0000 UTC m=+159.233138254" watchObservedRunningTime="2025-11-21 13:34:45.112519004 +0000 UTC m=+159.234051556" Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.188821 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.189102 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.689087161 +0000 UTC m=+159.810619713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.191039 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8l6wj" podStartSLOduration=132.191019828 podStartE2EDuration="2m12.191019828s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:45.189357617 +0000 UTC m=+159.310890169" watchObservedRunningTime="2025-11-21 13:34:45.191019828 +0000 UTC m=+159.312552380" Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.290809 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.291274 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.791256971 +0000 UTC m=+159.912789523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.391533 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.391758 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.891711149 +0000 UTC m=+160.013243701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.391797 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.392387 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.892364045 +0000 UTC m=+160.013896597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.492826 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.493033 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:45.992986487 +0000 UTC m=+160.114519039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.594742 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.595064 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.095048125 +0000 UTC m=+160.216580677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.696690 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.696891 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.196856386 +0000 UTC m=+160.318388938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.697057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.697381 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.197373539 +0000 UTC m=+160.318906091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.798733 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.798970 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.298931105 +0000 UTC m=+160.420463657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.799292 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.799769 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.299751724 +0000 UTC m=+160.421284276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.854427 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twpnr" event={"ID":"62b52056-10f9-4150-ae0f-b443b165074d","Type":"ContainerStarted","Data":"76758ef6a860d387823d72a2ff6c91fd941bce67090a410a8a3a5cf7efefd44a"} Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.855420 4904 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bff4w container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.855465 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" podUID="60d041e7-e6d2-441c-8b41-e908873c0d41" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.859088 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nvtq4" Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.900302 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.900538 4904 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.400510509 +0000 UTC m=+160.522043061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.900900 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:45 crc kubenswrapper[4904]: E1121 13:34:45.902121 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.402111919 +0000 UTC m=+160.523644471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.913058 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:45 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:45 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:45 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:45 crc kubenswrapper[4904]: I1121 13:34:45.913166 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.002663 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.002899 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-21 13:34:46.502847864 +0000 UTC m=+160.624380416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.002999 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.003411 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.503393418 +0000 UTC m=+160.624925970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.104072 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.104344 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.604306217 +0000 UTC m=+160.725838769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.104451 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.104871 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.60485415 +0000 UTC m=+160.726386702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.107693 4904 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.206438 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.206810 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.706787405 +0000 UTC m=+160.828319957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.308426 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.308771 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.808758451 +0000 UTC m=+160.930291003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.409242 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.409617 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.909568638 +0000 UTC m=+161.031101200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.409804 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.410134 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:46.910120511 +0000 UTC m=+161.031653063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.511632 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.511865 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:47.011826891 +0000 UTC m=+161.133359453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.512577 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.513047 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:47.01302449 +0000 UTC m=+161.134557042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.614432 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.614678 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:47.114628946 +0000 UTC m=+161.236161498 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.614839 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.615263 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:47.115244082 +0000 UTC m=+161.236776634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.715697 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.715818 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:47.215791911 +0000 UTC m=+161.337324463 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.715941 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.716350 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:47.216339255 +0000 UTC m=+161.337871817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.807002 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jt2rv"] Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.808217 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.810351 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.819114 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.819253 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-21 13:34:47.319222893 +0000 UTC m=+161.440755455 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.819465 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:46 crc kubenswrapper[4904]: E1121 13:34:46.819819 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-21 13:34:47.319806368 +0000 UTC m=+161.441338920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g96sr" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.819923 4904 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-21T13:34:46.107731912Z","Handler":null,"Name":""} Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.820980 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt2rv"] Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.826875 4904 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.826927 4904 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.864335 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twpnr" event={"ID":"62b52056-10f9-4150-ae0f-b443b165074d","Type":"ContainerStarted","Data":"44de3d30346a8a3fdd033f5bdb832651ee51b32450b024b7884749816997d606"} Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.864410 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-twpnr" event={"ID":"62b52056-10f9-4150-ae0f-b443b165074d","Type":"ContainerStarted","Data":"3e3a96643d9699b7ef02b2f957339efba2ab41d061c1871e34c0bd0a7621ad28"} Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.882234 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-twpnr" podStartSLOduration=10.882209013 
podStartE2EDuration="10.882209013s" podCreationTimestamp="2025-11-21 13:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:46.882146731 +0000 UTC m=+161.003679293" watchObservedRunningTime="2025-11-21 13:34:46.882209013 +0000 UTC m=+161.003741565" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.902230 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:46 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:46 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:46 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.902296 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.920286 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.920744 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx7lt\" (UniqueName: \"kubernetes.io/projected/572a2042-cdab-4f41-963e-2af4eb50fa19-kube-api-access-rx7lt\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.920793 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-catalog-content\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.920868 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-utilities\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:46 crc kubenswrapper[4904]: I1121 13:34:46.926407 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.003736 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hltmb"] Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.004801 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.008023 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.018593 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hltmb"] Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.021793 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.021855 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx7lt\" (UniqueName: \"kubernetes.io/projected/572a2042-cdab-4f41-963e-2af4eb50fa19-kube-api-access-rx7lt\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.021893 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-catalog-content\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.021931 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-utilities\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.022391 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-utilities\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.022517 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-catalog-content\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.024485 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.024568 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.043557 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx7lt\" (UniqueName: \"kubernetes.io/projected/572a2042-cdab-4f41-963e-2af4eb50fa19-kube-api-access-rx7lt\") pod \"certified-operators-jt2rv\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.067488 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g96sr\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.123772 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-catalog-content\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.123828 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq2dk\" (UniqueName: \"kubernetes.io/projected/9e74e7f1-ac79-4360-9c5a-27488d4b985d-kube-api-access-lq2dk\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.123997 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-utilities\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.125525 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.213307 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jf8x4"] Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.214459 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.221718 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jf8x4"] Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.225040 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-utilities\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.225111 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-catalog-content\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.225129 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq2dk\" (UniqueName: \"kubernetes.io/projected/9e74e7f1-ac79-4360-9c5a-27488d4b985d-kube-api-access-lq2dk\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.226299 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-utilities\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.226531 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-catalog-content\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.249374 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq2dk\" (UniqueName: \"kubernetes.io/projected/9e74e7f1-ac79-4360-9c5a-27488d4b985d-kube-api-access-lq2dk\") pod \"community-operators-hltmb\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.314162 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.322325 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.326437 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr4pv\" (UniqueName: \"kubernetes.io/projected/ad0262dd-3967-43d2-85e5-4409394787d8-kube-api-access-nr4pv\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.326503 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-utilities\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.326524 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-catalog-content\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.340321 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jt2rv"] Nov 21 13:34:47 crc kubenswrapper[4904]: W1121 13:34:47.351465 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod572a2042_cdab_4f41_963e_2af4eb50fa19.slice/crio-7d3662c48ee360d610794e9c1ba46e384d409762cc39b9b5fdc2c1e4cf73a9ad WatchSource:0}: Error finding container 7d3662c48ee360d610794e9c1ba46e384d409762cc39b9b5fdc2c1e4cf73a9ad: Status 404 returned error can't find the container with id 7d3662c48ee360d610794e9c1ba46e384d409762cc39b9b5fdc2c1e4cf73a9ad Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.403450 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2zl2t"] Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.404630 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.421481 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zl2t"] Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.431960 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr4pv\" (UniqueName: \"kubernetes.io/projected/ad0262dd-3967-43d2-85e5-4409394787d8-kube-api-access-nr4pv\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.432037 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-utilities\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.432057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-catalog-content\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.432754 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-catalog-content\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.432823 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-utilities\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.457329 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr4pv\" (UniqueName: \"kubernetes.io/projected/ad0262dd-3967-43d2-85e5-4409394787d8-kube-api-access-nr4pv\") pod \"certified-operators-jf8x4\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") " pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.533183 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-utilities\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.533247 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-catalog-content\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.533308 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/6ee0aa5a-10c2-48db-badf-2942d5366d1b-kube-api-access-c7g9z\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.540129 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.568270 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hltmb"] Nov 21 13:34:47 crc kubenswrapper[4904]: W1121 13:34:47.577070 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e74e7f1_ac79_4360_9c5a_27488d4b985d.slice/crio-56400d07ad143cf00daca8488ab652432422101f79b34a2ff2af8e196071418d WatchSource:0}: Error finding container 56400d07ad143cf00daca8488ab652432422101f79b34a2ff2af8e196071418d: Status 404 returned error can't find the container with id 56400d07ad143cf00daca8488ab652432422101f79b34a2ff2af8e196071418d Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.604193 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g96sr"] Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.636219 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/6ee0aa5a-10c2-48db-badf-2942d5366d1b-kube-api-access-c7g9z\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.636318 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-utilities\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.636353 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-catalog-content\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.637021 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-catalog-content\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.637209 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-utilities\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.657465 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7g9z\" (UniqueName: 
\"kubernetes.io/projected/6ee0aa5a-10c2-48db-badf-2942d5366d1b-kube-api-access-c7g9z\") pod \"community-operators-2zl2t\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.720187 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.724563 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9sdsl" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.768188 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.900067 4904 generic.go:334] "Generic (PLEG): container finished" podID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerID="004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e" exitCode=0 Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.900135 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hltmb" event={"ID":"9e74e7f1-ac79-4360-9c5a-27488d4b985d","Type":"ContainerDied","Data":"004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e"} Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.900163 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hltmb" event={"ID":"9e74e7f1-ac79-4360-9c5a-27488d4b985d","Type":"ContainerStarted","Data":"56400d07ad143cf00daca8488ab652432422101f79b34a2ff2af8e196071418d"} Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.904902 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:47 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:47 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:47 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.904976 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.905236 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.914132 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" event={"ID":"db301386-8de5-4553-a48d-fd858d4fc6f9","Type":"ContainerStarted","Data":"1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405"} Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.914178 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" event={"ID":"db301386-8de5-4553-a48d-fd858d4fc6f9","Type":"ContainerStarted","Data":"cd741800a8717775e44496673e389b73a334791418337664a5dbc0c8ddace904"} Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.914768 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" 
Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.944913 4904 generic.go:334] "Generic (PLEG): container finished" podID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerID="a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16" exitCode=0 Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.946029 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2rv" event={"ID":"572a2042-cdab-4f41-963e-2af4eb50fa19","Type":"ContainerDied","Data":"a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16"} Nov 21 13:34:47 crc kubenswrapper[4904]: I1121 13:34:47.946064 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2rv" event={"ID":"572a2042-cdab-4f41-963e-2af4eb50fa19","Type":"ContainerStarted","Data":"7d3662c48ee360d610794e9c1ba46e384d409762cc39b9b5fdc2c1e4cf73a9ad"} Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.039858 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jf8x4"] Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.083813 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" podStartSLOduration=135.083797021 podStartE2EDuration="2m15.083797021s" podCreationTimestamp="2025-11-21 13:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:34:48.081136566 +0000 UTC m=+162.202669118" watchObservedRunningTime="2025-11-21 13:34:48.083797021 +0000 UTC m=+162.205329563" Nov 21 13:34:48 crc kubenswrapper[4904]: W1121 13:34:48.105198 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad0262dd_3967_43d2_85e5_4409394787d8.slice/crio-4d4a21eed98e4150742d3d882cf4ff656669aaf44bd77867027a6c200728e8eb WatchSource:0}: Error finding container 4d4a21eed98e4150742d3d882cf4ff656669aaf44bd77867027a6c200728e8eb: Status 404 returned error can't find the container with id 4d4a21eed98e4150742d3d882cf4ff656669aaf44bd77867027a6c200728e8eb Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.280814 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2zl2t"] Nov 21 13:34:48 crc kubenswrapper[4904]: W1121 13:34:48.294001 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ee0aa5a_10c2_48db_badf_2942d5366d1b.slice/crio-730f0a82e58a275a6b197356e852e10beaefe65240eeb1e993c15634d8559497 WatchSource:0}: Error finding container 730f0a82e58a275a6b197356e852e10beaefe65240eeb1e993c15634d8559497: Status 404 returned error can't find the container with id 730f0a82e58a275a6b197356e852e10beaefe65240eeb1e993c15634d8559497 Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.444879 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-cfhwx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.444941 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cfhwx" podUID="8c6f1a5d-0119-4508-a7cd-5cb0eb522d01" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.445109 4904 patch_prober.go:28] interesting pod/downloads-7954f5f757-cfhwx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.445173 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cfhwx" podUID="8c6f1a5d-0119-4508-a7cd-5cb0eb522d01" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.520567 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.717356 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.717956 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.720429 4904 patch_prober.go:28] interesting pod/console-f9d7485db-g69x5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.720490 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g69x5" podUID="dff8d615-db53-4198-adbd-b6f5fc1cd2be" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.862949 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.863020 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.871339 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.899738 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5tm89" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.903713 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:48 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:48 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:48 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.903771 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" 
podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.952800 4904 generic.go:334] "Generic (PLEG): container finished" podID="d19a8a38-94ec-4377-8f90-24f34f5fb547" containerID="acf9d917cece611e3714a2947789ed917d1cab98addad2c33f86b497f248379b" exitCode=0 Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.952856 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" event={"ID":"d19a8a38-94ec-4377-8f90-24f34f5fb547","Type":"ContainerDied","Data":"acf9d917cece611e3714a2947789ed917d1cab98addad2c33f86b497f248379b"} Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.955199 4904 generic.go:334] "Generic (PLEG): container finished" podID="ad0262dd-3967-43d2-85e5-4409394787d8" containerID="3fb77351a8ed6835fd07c8aa162fbd1d93fba0e658e7b2b3e39ad92c53a1dbeb" exitCode=0 Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.955270 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8x4" event={"ID":"ad0262dd-3967-43d2-85e5-4409394787d8","Type":"ContainerDied","Data":"3fb77351a8ed6835fd07c8aa162fbd1d93fba0e658e7b2b3e39ad92c53a1dbeb"} Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.955300 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8x4" event={"ID":"ad0262dd-3967-43d2-85e5-4409394787d8","Type":"ContainerStarted","Data":"4d4a21eed98e4150742d3d882cf4ff656669aaf44bd77867027a6c200728e8eb"} Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.957721 4904 generic.go:334] "Generic (PLEG): container finished" podID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerID="354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682" exitCode=0 Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.957821 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zl2t" event={"ID":"6ee0aa5a-10c2-48db-badf-2942d5366d1b","Type":"ContainerDied","Data":"354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682"} Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.957897 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zl2t" event={"ID":"6ee0aa5a-10c2-48db-badf-2942d5366d1b","Type":"ContainerStarted","Data":"730f0a82e58a275a6b197356e852e10beaefe65240eeb1e993c15634d8559497"} Nov 21 13:34:48 crc kubenswrapper[4904]: I1121 13:34:48.983473 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-qnqnz" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.013542 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5xkb"] Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.014806 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.017484 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.019268 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5xkb"] Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.114433 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-catalog-content\") pod \"redhat-marketplace-s5xkb\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.114597 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-utilities\") pod \"redhat-marketplace-s5xkb\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.114696 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbt7z\" (UniqueName: \"kubernetes.io/projected/3de70b5e-de0f-4506-a803-cc14ce3112b2-kube-api-access-tbt7z\") pod \"redhat-marketplace-s5xkb\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.216525 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-catalog-content\") pod \"redhat-marketplace-s5xkb\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.216576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-utilities\") pod \"redhat-marketplace-s5xkb\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.216604 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbt7z\" (UniqueName: \"kubernetes.io/projected/3de70b5e-de0f-4506-a803-cc14ce3112b2-kube-api-access-tbt7z\") pod \"redhat-marketplace-s5xkb\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.217231 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-catalog-content\") pod \"redhat-marketplace-s5xkb\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.217625 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-utilities\") pod \"redhat-marketplace-s5xkb\" (UID: 
\"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.242983 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbt7z\" (UniqueName: \"kubernetes.io/projected/3de70b5e-de0f-4506-a803-cc14ce3112b2-kube-api-access-tbt7z\") pod \"redhat-marketplace-s5xkb\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.378087 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.381523 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.428227 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v76"] Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.430568 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.436415 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v76"] Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.520324 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77w4w\" (UniqueName: \"kubernetes.io/projected/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-kube-api-access-77w4w\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.520744 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-catalog-content\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.520857 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-utilities\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.594531 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.595244 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.595327 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.600485 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.602119 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.621756 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77w4w\" (UniqueName: \"kubernetes.io/projected/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-kube-api-access-77w4w\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.621821 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-catalog-content\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.621877 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-utilities\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.622437 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-utilities\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.622679 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-catalog-content\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.640815 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77w4w\" (UniqueName: \"kubernetes.io/projected/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-kube-api-access-77w4w\") pod \"redhat-marketplace-w5v76\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") " pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.687086 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5xkb"] Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.722835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7060588a-553a-49b5-b808-1bf2c60e65d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7060588a-553a-49b5-b808-1bf2c60e65d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.722933 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7060588a-553a-49b5-b808-1bf2c60e65d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7060588a-553a-49b5-b808-1bf2c60e65d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.773995 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v76" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.823598 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7060588a-553a-49b5-b808-1bf2c60e65d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7060588a-553a-49b5-b808-1bf2c60e65d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.823710 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7060588a-553a-49b5-b808-1bf2c60e65d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7060588a-553a-49b5-b808-1bf2c60e65d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.823711 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7060588a-553a-49b5-b808-1bf2c60e65d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7060588a-553a-49b5-b808-1bf2c60e65d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.850783 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7060588a-553a-49b5-b808-1bf2c60e65d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7060588a-553a-49b5-b808-1bf2c60e65d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.904084 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:49 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:49 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:49 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.904173 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.919867 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.976556 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5xkb" event={"ID":"3de70b5e-de0f-4506-a803-cc14ce3112b2","Type":"ContainerStarted","Data":"d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53"} Nov 21 13:34:49 crc kubenswrapper[4904]: I1121 13:34:49.976602 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5xkb" event={"ID":"3de70b5e-de0f-4506-a803-cc14ce3112b2","Type":"ContainerStarted","Data":"974f0fcda8f6b2c6e719e0fd4921423adcffdae4cb46f0de4cb1de2c649744ab"} Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.009181 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nlxnw"] Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.010363 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.015449 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.033252 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlxnw"] Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.129558 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmslj\" (UniqueName: \"kubernetes.io/projected/4b3df79f-6e68-4914-ac06-07efe94a329d-kube-api-access-nmslj\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.129743 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-utilities\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.129822 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-catalog-content\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.231369 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmslj\" (UniqueName: \"kubernetes.io/projected/4b3df79f-6e68-4914-ac06-07efe94a329d-kube-api-access-nmslj\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.231775 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-utilities\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.231812 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-catalog-content\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.232664 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-utilities\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.232714 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-catalog-content\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.259978 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmslj\" (UniqueName: \"kubernetes.io/projected/4b3df79f-6e68-4914-ac06-07efe94a329d-kube-api-access-nmslj\") pod \"redhat-operators-nlxnw\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.311005 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.336815 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.336862 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.338531 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v76"] Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.405984 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bt59j"] Nov 21 13:34:50 crc kubenswrapper[4904]: E1121 13:34:50.406299 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19a8a38-94ec-4377-8f90-24f34f5fb547" containerName="collect-profiles" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.406312 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19a8a38-94ec-4377-8f90-24f34f5fb547" containerName="collect-profiles" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.406409 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19a8a38-94ec-4377-8f90-24f34f5fb547" containerName="collect-profiles" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.408846 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.428569 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bt59j"] Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.438149 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19a8a38-94ec-4377-8f90-24f34f5fb547-config-volume\") pod \"d19a8a38-94ec-4377-8f90-24f34f5fb547\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.438300 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjtxm\" (UniqueName: \"kubernetes.io/projected/d19a8a38-94ec-4377-8f90-24f34f5fb547-kube-api-access-vjtxm\") pod \"d19a8a38-94ec-4377-8f90-24f34f5fb547\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.438337 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19a8a38-94ec-4377-8f90-24f34f5fb547-secret-volume\") pod \"d19a8a38-94ec-4377-8f90-24f34f5fb547\" (UID: \"d19a8a38-94ec-4377-8f90-24f34f5fb547\") " Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.439791 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19a8a38-94ec-4377-8f90-24f34f5fb547-config-volume" (OuterVolumeSpecName: "config-volume") pod "d19a8a38-94ec-4377-8f90-24f34f5fb547" (UID: "d19a8a38-94ec-4377-8f90-24f34f5fb547"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.443275 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19a8a38-94ec-4377-8f90-24f34f5fb547-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d19a8a38-94ec-4377-8f90-24f34f5fb547" (UID: "d19a8a38-94ec-4377-8f90-24f34f5fb547"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.443328 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19a8a38-94ec-4377-8f90-24f34f5fb547-kube-api-access-vjtxm" (OuterVolumeSpecName: "kube-api-access-vjtxm") pod "d19a8a38-94ec-4377-8f90-24f34f5fb547" (UID: "d19a8a38-94ec-4377-8f90-24f34f5fb547"). InnerVolumeSpecName "kube-api-access-vjtxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.539710 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtgp\" (UniqueName: \"kubernetes.io/projected/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-kube-api-access-9wtgp\") pod \"redhat-operators-bt59j\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.539770 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-utilities\") pod \"redhat-operators-bt59j\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.540314 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-catalog-content\") pod \"redhat-operators-bt59j\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.540576 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d19a8a38-94ec-4377-8f90-24f34f5fb547-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.540632 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjtxm\" (UniqueName: \"kubernetes.io/projected/d19a8a38-94ec-4377-8f90-24f34f5fb547-kube-api-access-vjtxm\") on node \"crc\" DevicePath \"\"" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.540644 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d19a8a38-94ec-4377-8f90-24f34f5fb547-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.642819 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wtgp\" (UniqueName: \"kubernetes.io/projected/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-kube-api-access-9wtgp\") pod \"redhat-operators-bt59j\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.643490 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-utilities\") pod \"redhat-operators-bt59j\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.643642 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-catalog-content\") pod \"redhat-operators-bt59j\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.644550 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-catalog-content\") pod \"redhat-operators-bt59j\" 
(UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.644803 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-utilities\") pod \"redhat-operators-bt59j\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.667551 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wtgp\" (UniqueName: \"kubernetes.io/projected/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-kube-api-access-9wtgp\") pod \"redhat-operators-bt59j\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") " pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.735395 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nlxnw"] Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.746789 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bt59j" Nov 21 13:34:50 crc kubenswrapper[4904]: W1121 13:34:50.758833 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b3df79f_6e68_4914_ac06_07efe94a329d.slice/crio-a46066dbcfb7633fe19188cb8910dde3661345de9f422d1956b33016d2326afe WatchSource:0}: Error finding container a46066dbcfb7633fe19188cb8910dde3661345de9f422d1956b33016d2326afe: Status 404 returned error can't find the container with id a46066dbcfb7633fe19188cb8910dde3661345de9f422d1956b33016d2326afe Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.905272 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:50 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:50 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:50 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.905332 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:50 crc kubenswrapper[4904]: I1121 13:34:50.992424 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7060588a-553a-49b5-b808-1bf2c60e65d1","Type":"ContainerStarted","Data":"7a88172753daaab0a5dab8ea7a00ad493186bc5a66b597f113457b177879d092"} Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.012900 4904 generic.go:334] "Generic (PLEG): container finished" podID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerID="d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53" exitCode=0 Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.013031 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5xkb" event={"ID":"3de70b5e-de0f-4506-a803-cc14ce3112b2","Type":"ContainerDied","Data":"d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53"} Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 
13:34:51.015807 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v76" event={"ID":"04e95b5c-a2be-4f48-bf7e-fc6823ea986b","Type":"ContainerStarted","Data":"eaab0eb882dcc73bb64ef004b476271e093269a8343cce55312db84726a9ece3"} Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.019129 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxnw" event={"ID":"4b3df79f-6e68-4914-ac06-07efe94a329d","Type":"ContainerStarted","Data":"a46066dbcfb7633fe19188cb8910dde3661345de9f422d1956b33016d2326afe"} Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.027535 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" event={"ID":"d19a8a38-94ec-4377-8f90-24f34f5fb547","Type":"ContainerDied","Data":"7ecbf7725fb10ddbfae3d010a6b75efe31a1ef2cb4cfc85f284dc9d11e2970d9"} Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.027589 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ecbf7725fb10ddbfae3d010a6b75efe31a1ef2cb4cfc85f284dc9d11e2970d9" Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.027647 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r" Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.083255 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bt59j"] Nov 21 13:34:51 crc kubenswrapper[4904]: W1121 13:34:51.105108 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f58a95_b030_4aa2_8c3b_9c96d3ca29ef.slice/crio-e20fe03eac832e1774bc5d74c18b5829d991654f4bd2622eb6eb7620bf8ef01b WatchSource:0}: Error finding container e20fe03eac832e1774bc5d74c18b5829d991654f4bd2622eb6eb7620bf8ef01b: Status 404 returned error can't find the container with id e20fe03eac832e1774bc5d74c18b5829d991654f4bd2622eb6eb7620bf8ef01b Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.903608 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:51 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:51 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:51 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:51 crc kubenswrapper[4904]: I1121 13:34:51.903973 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.036764 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7060588a-553a-49b5-b808-1bf2c60e65d1","Type":"ContainerStarted","Data":"b48f9dae34709bc9a8e68e630a2cc236fe73674b7851b8934918e636ad23853e"} Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.038761 4904 generic.go:334] "Generic (PLEG): container finished" podID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerID="7a39f6021e08ee431ded12cf4bdf3e0fd3a883562328cbc4db43fd05642b143b" exitCode=0 Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 
13:34:52.038854 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v76" event={"ID":"04e95b5c-a2be-4f48-bf7e-fc6823ea986b","Type":"ContainerDied","Data":"7a39f6021e08ee431ded12cf4bdf3e0fd3a883562328cbc4db43fd05642b143b"} Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.041441 4904 generic.go:334] "Generic (PLEG): container finished" podID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerID="114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097" exitCode=0 Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.041578 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxnw" event={"ID":"4b3df79f-6e68-4914-ac06-07efe94a329d","Type":"ContainerDied","Data":"114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097"} Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.045880 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt59j" event={"ID":"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef","Type":"ContainerStarted","Data":"e20fe03eac832e1774bc5d74c18b5829d991654f4bd2622eb6eb7620bf8ef01b"} Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.942106 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:52 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:52 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:52 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.942351 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.954722 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.956050 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.960041 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.960268 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 21 13:34:52 crc kubenswrapper[4904]: I1121 13:34:52.965392 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.064989 4904 generic.go:334] "Generic (PLEG): container finished" podID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerID="6b250e1e9764595c803b74154c3bdbc0a3b5958cd93a2442e2abd240606630e4" exitCode=0 Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.065085 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt59j" event={"ID":"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef","Type":"ContainerDied","Data":"6b250e1e9764595c803b74154c3bdbc0a3b5958cd93a2442e2abd240606630e4"} Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.074864 4904 generic.go:334] "Generic (PLEG): container finished" podID="7060588a-553a-49b5-b808-1bf2c60e65d1" containerID="b48f9dae34709bc9a8e68e630a2cc236fe73674b7851b8934918e636ad23853e" exitCode=0 Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.075371 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7060588a-553a-49b5-b808-1bf2c60e65d1","Type":"ContainerDied","Data":"b48f9dae34709bc9a8e68e630a2cc236fe73674b7851b8934918e636ad23853e"} Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.102791 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.102906 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.204228 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.204347 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.204442 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.224068 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.291516 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.902066 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:53 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:53 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:53 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:53 crc kubenswrapper[4904]: I1121 13:34:53.902140 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:54 crc kubenswrapper[4904]: I1121 13:34:54.453276 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wxjkg" Nov 21 13:34:54 crc kubenswrapper[4904]: I1121 13:34:54.904235 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 21 13:34:54 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld Nov 21 13:34:54 crc kubenswrapper[4904]: [+]process-running ok Nov 21 13:34:54 crc kubenswrapper[4904]: healthz check failed Nov 21 13:34:54 crc kubenswrapper[4904]: I1121 13:34:54.904303 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 13:34:55 crc kubenswrapper[4904]: I1121 13:34:55.556385 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:55 crc kubenswrapper[4904]: I1121 13:34:55.565717 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7c038482-babe-44bf-a8ff-89415347e81f-metrics-certs\") pod \"network-metrics-daemon-mx57c\" (UID: \"7c038482-babe-44bf-a8ff-89415347e81f\") " pod="openshift-multus/network-metrics-daemon-mx57c" Nov 21 13:34:55 crc kubenswrapper[4904]: I1121 13:34:55.732815 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mx57c"
Nov 21 13:34:55 crc kubenswrapper[4904]: I1121 13:34:55.905326 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 13:34:55 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld
Nov 21 13:34:55 crc kubenswrapper[4904]: [+]process-running ok
Nov 21 13:34:55 crc kubenswrapper[4904]: healthz check failed
Nov 21 13:34:55 crc kubenswrapper[4904]: I1121 13:34:55.905427 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 13:34:56 crc kubenswrapper[4904]: I1121 13:34:56.903492 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 13:34:56 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld
Nov 21 13:34:56 crc kubenswrapper[4904]: [+]process-running ok
Nov 21 13:34:56 crc kubenswrapper[4904]: healthz check failed
Nov 21 13:34:56 crc kubenswrapper[4904]: I1121 13:34:56.904248 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 13:34:57 crc kubenswrapper[4904]: I1121 13:34:57.902420 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 13:34:57 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld
Nov 21 13:34:57 crc kubenswrapper[4904]: [+]process-running ok
Nov 21 13:34:57 crc kubenswrapper[4904]: healthz check failed
Nov 21 13:34:57 crc kubenswrapper[4904]: I1121 13:34:57.902490 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 13:34:58 crc kubenswrapper[4904]: I1121 13:34:58.113459 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 13:34:58 crc kubenswrapper[4904]: I1121 13:34:58.113563 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 13:34:58 crc kubenswrapper[4904]: I1121 13:34:58.453092 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cfhwx"
Nov 21 13:34:58 crc kubenswrapper[4904]: I1121 13:34:58.717469 4904 patch_prober.go:28] interesting pod/console-f9d7485db-g69x5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Nov 21 13:34:58 crc kubenswrapper[4904]: I1121 13:34:58.717586 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-g69x5" podUID="dff8d615-db53-4198-adbd-b6f5fc1cd2be" containerName="console" probeResult="failure" output="Get \"https://10.217.0.30:8443/health\": dial tcp 10.217.0.30:8443: connect: connection refused"
Nov 21 13:34:58 crc kubenswrapper[4904]: I1121 13:34:58.903631 4904 patch_prober.go:28] interesting pod/router-default-5444994796-5tm89 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 21 13:34:58 crc kubenswrapper[4904]: [-]has-synced failed: reason withheld
Nov 21 13:34:58 crc kubenswrapper[4904]: [+]process-running ok
Nov 21 13:34:58 crc kubenswrapper[4904]: healthz check failed
Nov 21 13:34:58 crc kubenswrapper[4904]: I1121 13:34:58.903770 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5tm89" podUID="fb945dc3-fcd0-43b6-a048-82682523848b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.054800 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.120557 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7060588a-553a-49b5-b808-1bf2c60e65d1-kube-api-access\") pod \"7060588a-553a-49b5-b808-1bf2c60e65d1\" (UID: \"7060588a-553a-49b5-b808-1bf2c60e65d1\") "
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.121286 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7060588a-553a-49b5-b808-1bf2c60e65d1-kubelet-dir\") pod \"7060588a-553a-49b5-b808-1bf2c60e65d1\" (UID: \"7060588a-553a-49b5-b808-1bf2c60e65d1\") "
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.121583 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7060588a-553a-49b5-b808-1bf2c60e65d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7060588a-553a-49b5-b808-1bf2c60e65d1" (UID: "7060588a-553a-49b5-b808-1bf2c60e65d1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.129068 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7060588a-553a-49b5-b808-1bf2c60e65d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7060588a-553a-49b5-b808-1bf2c60e65d1" (UID: "7060588a-553a-49b5-b808-1bf2c60e65d1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.136206 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7060588a-553a-49b5-b808-1bf2c60e65d1","Type":"ContainerDied","Data":"7a88172753daaab0a5dab8ea7a00ad493186bc5a66b597f113457b177879d092"}
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.136274 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a88172753daaab0a5dab8ea7a00ad493186bc5a66b597f113457b177879d092"
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.136280 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.222747 4904 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7060588a-553a-49b5-b808-1bf2c60e65d1-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.222786 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7060588a-553a-49b5-b808-1bf2c60e65d1-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.903088 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5tm89"
Nov 21 13:34:59 crc kubenswrapper[4904]: I1121 13:34:59.905287 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5tm89"
Nov 21 13:35:00 crc kubenswrapper[4904]: I1121 13:35:00.656385 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mx57c"]
Nov 21 13:35:00 crc kubenswrapper[4904]: I1121 13:35:00.696800 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 21 13:35:05 crc kubenswrapper[4904]: W1121 13:35:05.398579 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod49f04c1e_a7a8_4b6d_b3a7_cdbdbc251b10.slice/crio-896ade716533e8abb1d76fdf99f76bb5508dd963faf6d880adcf40d33de2f3be WatchSource:0}: Error finding container 896ade716533e8abb1d76fdf99f76bb5508dd963faf6d880adcf40d33de2f3be: Status 404 returned error can't find the container with id 896ade716533e8abb1d76fdf99f76bb5508dd963faf6d880adcf40d33de2f3be
Nov 21 13:35:06 crc kubenswrapper[4904]: I1121 13:35:06.169533 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10","Type":"ContainerStarted","Data":"896ade716533e8abb1d76fdf99f76bb5508dd963faf6d880adcf40d33de2f3be"}
Nov 21 13:35:07 crc kubenswrapper[4904]: I1121 13:35:07.323984 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr"
Nov 21 13:35:08 crc kubenswrapper[4904]: I1121 13:35:08.720755 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-g69x5"
Nov 21 13:35:08 crc kubenswrapper[4904]: I1121 13:35:08.724352 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-g69x5"
Nov 21 13:35:14 crc kubenswrapper[4904]: I1121 13:35:14.698299 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 21 13:35:15 crc kubenswrapper[4904]: E1121 13:35:15.993782 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 21 13:35:15 crc kubenswrapper[4904]: E1121 13:35:15.994034 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7g9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2zl2t_openshift-marketplace(6ee0aa5a-10c2-48db-badf-2942d5366d1b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 21 13:35:15 crc kubenswrapper[4904]: E1121 13:35:15.995251 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2zl2t" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b"
Nov 21 13:35:18 crc kubenswrapper[4904]: E1121 13:35:18.851795 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2zl2t" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b"
Nov 21 13:35:19 crc kubenswrapper[4904]: I1121 13:35:19.269364 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mx57c" event={"ID":"7c038482-babe-44bf-a8ff-89415347e81f","Type":"ContainerStarted","Data":"de25e3a87ce0f97f5a6c7dbd45a98fdc461672b20408d335ca58ea7a6ad28bba"}
Nov 21 13:35:19 crc kubenswrapper[4904]: I1121 13:35:19.671347 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rh865"
Nov 21 13:35:19 crc kubenswrapper[4904]: E1121 13:35:19.694714 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 21 13:35:19 crc kubenswrapper[4904]: E1121 13:35:19.694863 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rx7lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jt2rv_openshift-marketplace(572a2042-cdab-4f41-963e-2af4eb50fa19): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 21 13:35:19 crc kubenswrapper[4904]: E1121 13:35:19.696157 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jt2rv" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19"
Nov 21 13:35:21 crc kubenswrapper[4904]: E1121 13:35:21.298310 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 21 13:35:21 crc kubenswrapper[4904]: E1121 13:35:21.298488 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nr4pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jf8x4_openshift-marketplace(ad0262dd-3967-43d2-85e5-4409394787d8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 21 13:35:21 crc kubenswrapper[4904]: E1121 13:35:21.299792 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jf8x4" podUID="ad0262dd-3967-43d2-85e5-4409394787d8"
Nov 21 13:35:21 crc kubenswrapper[4904]: E1121 13:35:21.366483 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jt2rv" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19"
Nov 21 13:35:22 crc kubenswrapper[4904]: E1121 13:35:22.289315 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jf8x4" podUID="ad0262dd-3967-43d2-85e5-4409394787d8"
Nov 21 13:35:22 crc kubenswrapper[4904]: E1121 13:35:22.611568 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 21 13:35:22 crc kubenswrapper[4904]: E1121 13:35:22.611756 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq2dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hltmb_openshift-marketplace(9e74e7f1-ac79-4360-9c5a-27488d4b985d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 21 13:35:22 crc kubenswrapper[4904]: E1121 13:35:22.612946 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hltmb" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d"
Nov 21 13:35:23 crc kubenswrapper[4904]: I1121 13:35:23.295522 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10","Type":"ContainerStarted","Data":"3856f3b432365502f394424a6ef4d93a2f7d0aa0669941f5d0f494f91c4e3c1d"}
Nov 21 13:35:23 crc kubenswrapper[4904]: I1121 13:35:23.300794 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mx57c" event={"ID":"7c038482-babe-44bf-a8ff-89415347e81f","Type":"ContainerStarted","Data":"616096c8eb80b65b18eb241aed1b967a3465bbdf4936f7273cd36f44c5208706"}
Nov 21 13:35:23 crc kubenswrapper[4904]: E1121 13:35:23.306890 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hltmb" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d"
Nov 21 13:35:23 crc kubenswrapper[4904]: I1121 13:35:23.325040 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=31.325011262 podStartE2EDuration="31.325011262s" podCreationTimestamp="2025-11-21 13:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:23.315311232 +0000 UTC m=+197.436843784" watchObservedRunningTime="2025-11-21 13:35:23.325011262 +0000 UTC m=+197.446543814"
Nov 21 13:35:24 crc kubenswrapper[4904]: I1121 13:35:24.307970 4904 generic.go:334] "Generic (PLEG): container finished" podID="49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10" containerID="3856f3b432365502f394424a6ef4d93a2f7d0aa0669941f5d0f494f91c4e3c1d" exitCode=0
Nov 21 13:35:24 crc kubenswrapper[4904]: I1121 13:35:24.308109 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10","Type":"ContainerDied","Data":"3856f3b432365502f394424a6ef4d93a2f7d0aa0669941f5d0f494f91c4e3c1d"}
Nov 21 13:35:24 crc kubenswrapper[4904]: I1121 13:35:24.311643 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mx57c" event={"ID":"7c038482-babe-44bf-a8ff-89415347e81f","Type":"ContainerStarted","Data":"0f618b3657e053f083df47abbae59a00a0365a87e0e0d29aeba3bc0d67ad4813"}
Nov 21 13:35:24 crc kubenswrapper[4904]: I1121 13:35:24.347513 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mx57c" podStartSLOduration=172.347489235 podStartE2EDuration="2m52.347489235s" podCreationTimestamp="2025-11-21 13:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:35:24.344983514 +0000 UTC m=+198.466516116" watchObservedRunningTime="2025-11-21 13:35:24.347489235 +0000 UTC m=+198.469021797"
Nov 21 13:35:24 crc kubenswrapper[4904]: E1121 13:35:24.511298 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 21 13:35:24 crc kubenswrapper[4904]: E1121 13:35:24.511852 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbt7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s5xkb_openshift-marketplace(3de70b5e-de0f-4506-a803-cc14ce3112b2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 21 13:35:24 crc kubenswrapper[4904]: E1121 13:35:24.513125 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-s5xkb" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2"
Nov 21 13:35:27 crc kubenswrapper[4904]: E1121 13:35:27.913243 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s5xkb" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2"
Nov 21 13:35:27 crc kubenswrapper[4904]: I1121 13:35:27.963398 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.048712 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kube-api-access\") pod \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\" (UID: \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\") "
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.049520 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kubelet-dir\") pod \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\" (UID: \"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10\") "
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.049602 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10" (UID: "49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.049915 4904 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.056062 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10" (UID: "49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.114787 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.114994 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.152161 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.344615 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt59j" event={"ID":"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef","Type":"ContainerStarted","Data":"f61dc44bf700d4371d48e00bec5e0440880694e6a941b0570a006c2329285750"}
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.347746 4904 generic.go:334] "Generic (PLEG): container finished" podID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerID="770c106e735269b420ca07f837d546b7c3971d12a4227883adda8c252fad5332" exitCode=0
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.347848 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v76" event={"ID":"04e95b5c-a2be-4f48-bf7e-fc6823ea986b","Type":"ContainerDied","Data":"770c106e735269b420ca07f837d546b7c3971d12a4227883adda8c252fad5332"}
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.352877 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10","Type":"ContainerDied","Data":"896ade716533e8abb1d76fdf99f76bb5508dd963faf6d880adcf40d33de2f3be"}
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.352916 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="896ade716533e8abb1d76fdf99f76bb5508dd963faf6d880adcf40d33de2f3be"
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.352991 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 21 13:35:28 crc kubenswrapper[4904]: I1121 13:35:28.360856 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxnw" event={"ID":"4b3df79f-6e68-4914-ac06-07efe94a329d","Type":"ContainerStarted","Data":"89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125"}
Nov 21 13:35:29 crc kubenswrapper[4904]: I1121 13:35:29.370008 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v76" event={"ID":"04e95b5c-a2be-4f48-bf7e-fc6823ea986b","Type":"ContainerStarted","Data":"5b5566e7fd40cc74f7e9de0ff41bc6a8b22c20889bdf7f7abbe4457c02b29242"}
Nov 21 13:35:29 crc kubenswrapper[4904]: I1121 13:35:29.374264 4904 generic.go:334] "Generic (PLEG): container finished" podID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerID="89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125" exitCode=0
Nov 21 13:35:29 crc kubenswrapper[4904]: I1121 13:35:29.374354 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxnw" event={"ID":"4b3df79f-6e68-4914-ac06-07efe94a329d","Type":"ContainerDied","Data":"89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125"}
Nov 21 13:35:29 crc kubenswrapper[4904]: I1121 13:35:29.381642 4904 generic.go:334] "Generic (PLEG): container finished" podID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerID="f61dc44bf700d4371d48e00bec5e0440880694e6a941b0570a006c2329285750" exitCode=0
Nov 21 13:35:29 crc kubenswrapper[4904]: I1121 13:35:29.381741 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt59j" event={"ID":"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef","Type":"ContainerDied","Data":"f61dc44bf700d4371d48e00bec5e0440880694e6a941b0570a006c2329285750"}
Nov 21 13:35:29 crc kubenswrapper[4904]: I1121 13:35:29.389965 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5v76" podStartSLOduration=3.623405224 podStartE2EDuration="40.389941061s" podCreationTimestamp="2025-11-21 13:34:49 +0000 UTC" firstStartedPulling="2025-11-21 13:34:52.042319232 +0000 UTC m=+166.163851774" lastFinishedPulling="2025-11-21 13:35:28.808855059 +0000 UTC m=+202.930387611" observedRunningTime="2025-11-21 13:35:29.389849509 +0000 UTC m=+203.511382111" watchObservedRunningTime="2025-11-21 13:35:29.389941061 +0000 UTC m=+203.511473603"
Nov 21 13:35:29 crc kubenswrapper[4904]: I1121 13:35:29.775583 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5v76"
Nov 21 13:35:29 crc kubenswrapper[4904]: I1121 13:35:29.776529 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5v76"
Nov 21 13:35:30 crc kubenswrapper[4904]: I1121 13:35:30.389384 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxnw" event={"ID":"4b3df79f-6e68-4914-ac06-07efe94a329d","Type":"ContainerStarted","Data":"71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3"}
Nov 21 13:35:30 crc kubenswrapper[4904]: I1121 13:35:30.392368 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt59j" event={"ID":"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef","Type":"ContainerStarted","Data":"eda13fda98334fa47e16a2a5e86c96dce42802510120a4f5b8513a2150bd9739"}
Nov 21 13:35:30 crc kubenswrapper[4904]: I1121 13:35:30.411434 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nlxnw" podStartSLOduration=10.29551596 podStartE2EDuration="41.411415779s" podCreationTimestamp="2025-11-21 13:34:49 +0000 UTC" firstStartedPulling="2025-11-21 13:34:59.012637154 +0000 UTC m=+173.134169706" lastFinishedPulling="2025-11-21 13:35:30.128536973 +0000 UTC m=+204.250069525" observedRunningTime="2025-11-21 13:35:30.409237166 +0000 UTC m=+204.530769738" watchObservedRunningTime="2025-11-21 13:35:30.411415779 +0000 UTC m=+204.532948331"
Nov 21 13:35:30 crc kubenswrapper[4904]: I1121 13:35:30.432703 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bt59j" podStartSLOduration=9.471047197 podStartE2EDuration="40.432631645s" podCreationTimestamp="2025-11-21 13:34:50 +0000 UTC" firstStartedPulling="2025-11-21 13:34:59.012598973 +0000 UTC m=+173.134131525" lastFinishedPulling="2025-11-21 13:35:29.974183421 +0000 UTC m=+204.095715973" observedRunningTime="2025-11-21 13:35:30.431355244 +0000 UTC m=+204.552887806" watchObservedRunningTime="2025-11-21 13:35:30.432631645 +0000 UTC m=+204.554164197"
Nov 21 13:35:30 crc kubenswrapper[4904]: I1121 13:35:30.747063 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bt59j"
Nov 21 13:35:30 crc kubenswrapper[4904]: I1121 13:35:30.747182 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bt59j"
Nov 21 13:35:31 crc kubenswrapper[4904]: I1121 13:35:31.006785 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-w5v76" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="registry-server" probeResult="failure" output=<
Nov 21 13:35:31 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s
Nov 21 13:35:31 crc kubenswrapper[4904]: >
Nov 21 13:35:31 crc kubenswrapper[4904]: I1121 13:35:31.796412 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bt59j" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="registry-server" probeResult="failure" output=<
Nov 21 13:35:31 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s
Nov 21 13:35:31 crc kubenswrapper[4904]: >
Nov 21 13:35:32 crc kubenswrapper[4904]: I1121 13:35:32.509530 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7bm6t"]
Nov 21 13:35:36 crc kubenswrapper[4904]: I1121 13:35:36.427184 4904 generic.go:334] "Generic (PLEG): container finished" podID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerID="375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b" exitCode=0
Nov 21 13:35:36 crc kubenswrapper[4904]: I1121 13:35:36.427275 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2rv" event={"ID":"572a2042-cdab-4f41-963e-2af4eb50fa19","Type":"ContainerDied","Data":"375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b"}
Nov 21 13:35:37 crc kubenswrapper[4904]: I1121 13:35:37.456008 4904 generic.go:334] "Generic (PLEG): container finished" podID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerID="250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455" exitCode=0
Nov 21 13:35:37 crc kubenswrapper[4904]: I1121 13:35:37.456109 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zl2t" event={"ID":"6ee0aa5a-10c2-48db-badf-2942d5366d1b","Type":"ContainerDied","Data":"250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455"}
Nov 21 13:35:37 crc kubenswrapper[4904]: I1121 13:35:37.472289 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2rv" event={"ID":"572a2042-cdab-4f41-963e-2af4eb50fa19","Type":"ContainerStarted","Data":"0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a"}
Nov 21 13:35:37 crc kubenswrapper[4904]: I1121 13:35:37.485568 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8x4" event={"ID":"ad0262dd-3967-43d2-85e5-4409394787d8","Type":"ContainerStarted","Data":"0e74b7ddd31684dc0fc372df625d6d3905ca3fe8825706fa8cf8d7c0a60e45ea"}
Nov 21 13:35:37 crc kubenswrapper[4904]: I1121 13:35:37.512746 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jt2rv" podStartSLOduration=2.578855623 podStartE2EDuration="51.512727416s" podCreationTimestamp="2025-11-21 13:34:46 +0000 UTC" firstStartedPulling="2025-11-21 13:34:47.947282601 +0000 UTC m=+162.068815153" lastFinishedPulling="2025-11-21 13:35:36.881154394 +0000 UTC m=+211.002686946" observedRunningTime="2025-11-21 13:35:37.510756097 +0000 UTC m=+211.632288669" watchObservedRunningTime="2025-11-21 13:35:37.512727416 +0000 UTC m=+211.634259968"
Nov 21 13:35:38 crc kubenswrapper[4904]: I1121 13:35:38.494001 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hltmb" event={"ID":"9e74e7f1-ac79-4360-9c5a-27488d4b985d","Type":"ContainerStarted","Data":"1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c"}
Nov 21 13:35:38 crc kubenswrapper[4904]: I1121 13:35:38.499826 4904 generic.go:334] "Generic (PLEG): container finished" podID="ad0262dd-3967-43d2-85e5-4409394787d8" containerID="0e74b7ddd31684dc0fc372df625d6d3905ca3fe8825706fa8cf8d7c0a60e45ea" exitCode=0
Nov 21 13:35:38 crc kubenswrapper[4904]: I1121 13:35:38.499882 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8x4" event={"ID":"ad0262dd-3967-43d2-85e5-4409394787d8","Type":"ContainerDied","Data":"0e74b7ddd31684dc0fc372df625d6d3905ca3fe8825706fa8cf8d7c0a60e45ea"}
Nov 21 13:35:38 crc kubenswrapper[4904]: I1121 13:35:38.502340 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zl2t" event={"ID":"6ee0aa5a-10c2-48db-badf-2942d5366d1b","Type":"ContainerStarted","Data":"2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed"}
Nov 21 13:35:38 crc kubenswrapper[4904]: I1121 13:35:38.571764 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2zl2t" podStartSLOduration=2.360170584 podStartE2EDuration="51.571745585s" podCreationTimestamp="2025-11-21 13:34:47 +0000 UTC" firstStartedPulling="2025-11-21 13:34:48.976345787 +0000 UTC m=+163.097878339" lastFinishedPulling="2025-11-21 13:35:38.187920788 +0000 UTC m=+212.309453340" observedRunningTime="2025-11-21 13:35:38.56911591 +0000 UTC m=+212.690648462" watchObservedRunningTime="2025-11-21 13:35:38.571745585 +0000 UTC m=+212.693278137"
Nov 21 13:35:39 crc kubenswrapper[4904]: I1121 13:35:39.515105 4904 generic.go:334] "Generic (PLEG): container finished" podID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerID="1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c" exitCode=0
Nov 21 13:35:39 crc kubenswrapper[4904]: I1121 13:35:39.515339 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hltmb" event={"ID":"9e74e7f1-ac79-4360-9c5a-27488d4b985d","Type":"ContainerDied","Data":"1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c"}
Nov 21 13:35:39 crc kubenswrapper[4904]: I1121 13:35:39.520359 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8x4" event={"ID":"ad0262dd-3967-43d2-85e5-4409394787d8","Type":"ContainerStarted","Data":"b480ed0acac6b758c3509f46390f13b6c95e2c152500933d17250ea1647f5f43"}
Nov 21 13:35:39 crc kubenswrapper[4904]: I1121 13:35:39.824803 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5v76"
Nov 21 13:35:39 crc kubenswrapper[4904]: I1121 13:35:39.865815 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5v76"
Nov 21 13:35:40 crc kubenswrapper[4904]: I1121 13:35:40.337925 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nlxnw"
Nov 21 13:35:40 crc kubenswrapper[4904]: I1121 13:35:40.338015 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nlxnw"
Nov 21 13:35:40 crc kubenswrapper[4904]: I1121 13:35:40.383467 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nlxnw"
Nov 21 13:35:40 crc kubenswrapper[4904]: I1121 13:35:40.560299 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jf8x4" podStartSLOduration=3.639991581 podStartE2EDuration="53.560279934s" podCreationTimestamp="2025-11-21 13:34:47 +0000 UTC" firstStartedPulling="2025-11-21 13:34:48.960396503 +0000 UTC m=+163.081929045" lastFinishedPulling="2025-11-21 13:35:38.880684846 +0000 UTC m=+213.002217398" observedRunningTime="2025-11-21 13:35:40.557063605 +0000 UTC m=+214.678596167" watchObservedRunningTime="2025-11-21 13:35:40.560279934 +0000 UTC m=+214.681812496"
Nov 21 13:35:40 crc kubenswrapper[4904]: I1121 13:35:40.571013 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nlxnw"
Nov 21 13:35:40 crc kubenswrapper[4904]: I1121 13:35:40.792721 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bt59j"
Nov 21 13:35:40 crc kubenswrapper[4904]: I1121 13:35:40.837683 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bt59j"
Nov 21 13:35:43 crc kubenswrapper[4904]: I1121 13:35:43.747959 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v76"]
Nov 21 13:35:43 crc kubenswrapper[4904]: I1121 13:35:43.748430 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5v76" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="registry-server" containerID="cri-o://5b5566e7fd40cc74f7e9de0ff41bc6a8b22c20889bdf7f7abbe4457c02b29242" gracePeriod=2
Nov 21 13:35:43 crc kubenswrapper[4904]: I1121 13:35:43.946471 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bt59j"]
Nov 21 13:35:43 crc kubenswrapper[4904]: I1121 13:35:43.947289 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bt59j" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="registry-server" containerID="cri-o://eda13fda98334fa47e16a2a5e86c96dce42802510120a4f5b8513a2150bd9739" gracePeriod=2
Nov 21 13:35:45 crc kubenswrapper[4904]: I1121 13:35:45.559344 4904 generic.go:334] "Generic (PLEG): container finished" podID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerID="5b5566e7fd40cc74f7e9de0ff41bc6a8b22c20889bdf7f7abbe4457c02b29242" exitCode=0
Nov 21 13:35:45 crc kubenswrapper[4904]: I1121 13:35:45.559429 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v76" event={"ID":"04e95b5c-a2be-4f48-bf7e-fc6823ea986b","Type":"ContainerDied","Data":"5b5566e7fd40cc74f7e9de0ff41bc6a8b22c20889bdf7f7abbe4457c02b29242"}
Nov 21 13:35:45 crc kubenswrapper[4904]: I1121 13:35:45.562104 4904 generic.go:334] "Generic (PLEG): container finished" podID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerID="eda13fda98334fa47e16a2a5e86c96dce42802510120a4f5b8513a2150bd9739" exitCode=0
Nov 21 13:35:45 crc kubenswrapper[4904]: I1121 13:35:45.562131 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt59j" event={"ID":"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef","Type":"ContainerDied","Data":"eda13fda98334fa47e16a2a5e86c96dce42802510120a4f5b8513a2150bd9739"}
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.181534 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v76"
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.204216 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-catalog-content\") pod \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") "
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.204446 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77w4w\" (UniqueName: \"kubernetes.io/projected/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-kube-api-access-77w4w\") pod \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") "
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.204483 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-utilities\") pod \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\" (UID: \"04e95b5c-a2be-4f48-bf7e-fc6823ea986b\") "
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.206579 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-utilities" (OuterVolumeSpecName: "utilities") pod "04e95b5c-a2be-4f48-bf7e-fc6823ea986b" (UID: "04e95b5c-a2be-4f48-bf7e-fc6823ea986b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.212866 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-kube-api-access-77w4w" (OuterVolumeSpecName: "kube-api-access-77w4w") pod "04e95b5c-a2be-4f48-bf7e-fc6823ea986b" (UID: "04e95b5c-a2be-4f48-bf7e-fc6823ea986b"). InnerVolumeSpecName "kube-api-access-77w4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.245056 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04e95b5c-a2be-4f48-bf7e-fc6823ea986b" (UID: "04e95b5c-a2be-4f48-bf7e-fc6823ea986b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.266488 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bt59j"
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.307675 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-utilities\") pod \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") "
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.307897 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wtgp\" (UniqueName: \"kubernetes.io/projected/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-kube-api-access-9wtgp\") pod \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") "
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.307936 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-catalog-content\") pod \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\" (UID: \"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef\") "
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.308277 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77w4w\" (UniqueName: \"kubernetes.io/projected/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-kube-api-access-77w4w\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.308306 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.308323 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04e95b5c-a2be-4f48-bf7e-fc6823ea986b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.309853 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-utilities" (OuterVolumeSpecName: "utilities") pod "27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" (UID: "27f58a95-b030-4aa2-8c3b-9c96d3ca29ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.314407 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-kube-api-access-9wtgp" (OuterVolumeSpecName: "kube-api-access-9wtgp") pod "27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" (UID: "27f58a95-b030-4aa2-8c3b-9c96d3ca29ef"). InnerVolumeSpecName "kube-api-access-9wtgp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.410405 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.410501 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wtgp\" (UniqueName: \"kubernetes.io/projected/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-kube-api-access-9wtgp\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.412256 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" (UID: "27f58a95-b030-4aa2-8c3b-9c96d3ca29ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.511247 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.576040 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hltmb" event={"ID":"9e74e7f1-ac79-4360-9c5a-27488d4b985d","Type":"ContainerStarted","Data":"e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a"}
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.593083 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5v76" event={"ID":"04e95b5c-a2be-4f48-bf7e-fc6823ea986b","Type":"ContainerDied","Data":"eaab0eb882dcc73bb64ef004b476271e093269a8343cce55312db84726a9ece3"}
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.593139 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5v76"
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.593147 4904 scope.go:117] "RemoveContainer" containerID="5b5566e7fd40cc74f7e9de0ff41bc6a8b22c20889bdf7f7abbe4457c02b29242"
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.602548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt59j" event={"ID":"27f58a95-b030-4aa2-8c3b-9c96d3ca29ef","Type":"ContainerDied","Data":"e20fe03eac832e1774bc5d74c18b5829d991654f4bd2622eb6eb7620bf8ef01b"}
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.602703 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bt59j"
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.610813 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hltmb" podStartSLOduration=2.365912819 podStartE2EDuration="1m0.610794846s" podCreationTimestamp="2025-11-21 13:34:46 +0000 UTC" firstStartedPulling="2025-11-21 13:34:47.904897641 +0000 UTC m=+162.026430193" lastFinishedPulling="2025-11-21 13:35:46.149779658 +0000 UTC m=+220.271312220" observedRunningTime="2025-11-21 13:35:46.608345495 +0000 UTC m=+220.729878057" watchObservedRunningTime="2025-11-21 13:35:46.610794846 +0000 UTC m=+220.732327398"
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.626469 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v76"]
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.633920 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5v76"]
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.650378 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bt59j"]
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.658867 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bt59j"]
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.902825 4904 scope.go:117] "RemoveContainer" containerID="770c106e735269b420ca07f837d546b7c3971d12a4227883adda8c252fad5332"
Nov 21 13:35:46 crc kubenswrapper[4904]: I1121 13:35:46.963272 4904 scope.go:117] "RemoveContainer" containerID="7a39f6021e08ee431ded12cf4bdf3e0fd3a883562328cbc4db43fd05642b143b"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.125901 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jt2rv"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.126231 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jt2rv"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.169860 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jt2rv"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.322687 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hltmb"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.322743 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hltmb"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.426575 4904 scope.go:117] "RemoveContainer" containerID="eda13fda98334fa47e16a2a5e86c96dce42802510120a4f5b8513a2150bd9739"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.440886 4904 scope.go:117] "RemoveContainer" containerID="f61dc44bf700d4371d48e00bec5e0440880694e6a941b0570a006c2329285750"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.455755 4904 scope.go:117] "RemoveContainer" containerID="6b250e1e9764595c803b74154c3bdbc0a3b5958cd93a2442e2abd240606630e4"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.541174 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jf8x4"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.541228 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jf8x4"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.584827 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jf8x4"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.654642 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jt2rv"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.663630 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jf8x4"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.769790 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2zl2t"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.770024 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2zl2t"
Nov 21 13:35:47 crc kubenswrapper[4904]: I1121 13:35:47.820536 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2zl2t"
Nov 21 13:35:48 crc kubenswrapper[4904]: I1121 13:35:48.361975 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hltmb" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="registry-server" probeResult="failure" output=<
Nov 21 13:35:48 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s
Nov 21 13:35:48 crc kubenswrapper[4904]: >
Nov 21 13:35:48 crc kubenswrapper[4904]: I1121 13:35:48.539282 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" path="/var/lib/kubelet/pods/04e95b5c-a2be-4f48-bf7e-fc6823ea986b/volumes"
Nov 21 13:35:48 crc kubenswrapper[4904]: I1121 13:35:48.540056 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" path="/var/lib/kubelet/pods/27f58a95-b030-4aa2-8c3b-9c96d3ca29ef/volumes"
Nov 21 13:35:48 crc kubenswrapper[4904]: I1121 13:35:48.626065 4904 generic.go:334] "Generic (PLEG): container finished" podID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerID="afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa" exitCode=0
Nov 21 13:35:48 crc kubenswrapper[4904]: I1121 13:35:48.626114 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5xkb" event={"ID":"3de70b5e-de0f-4506-a803-cc14ce3112b2","Type":"ContainerDied","Data":"afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa"}
Nov 21 13:35:48 crc kubenswrapper[4904]: I1121 13:35:48.682844 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2zl2t"
Nov 21 13:35:49 crc kubenswrapper[4904]: I1121 13:35:49.633442 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5xkb" event={"ID":"3de70b5e-de0f-4506-a803-cc14ce3112b2","Type":"ContainerStarted","Data":"8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63"}
Nov 21 13:35:49 crc kubenswrapper[4904]: I1121 13:35:49.652744 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5xkb" podStartSLOduration=2.540638579 podStartE2EDuration="1m1.652726705s" podCreationTimestamp="2025-11-21 13:34:48 +0000 UTC" firstStartedPulling="2025-11-21 13:34:49.982217609 +0000 UTC m=+164.103750161" lastFinishedPulling="2025-11-21 13:35:49.094305735 +0000 UTC m=+223.215838287" observedRunningTime="2025-11-21 13:35:49.651748481 +0000 UTC m=+223.773281043" watchObservedRunningTime="2025-11-21 13:35:49.652726705 +0000 UTC m=+223.774259247"
Nov 21 13:35:50 crc kubenswrapper[4904]: I1121 13:35:50.146553 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2zl2t"]
Nov 21 13:35:50 crc kubenswrapper[4904]: I1121 13:35:50.751474 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jf8x4"]
Nov 21 13:35:50 crc kubenswrapper[4904]: I1121 13:35:50.751925 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jf8x4" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" containerName="registry-server" containerID="cri-o://b480ed0acac6b758c3509f46390f13b6c95e2c152500933d17250ea1647f5f43" gracePeriod=2
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.647631 4904 generic.go:334] "Generic (PLEG): container finished" podID="ad0262dd-3967-43d2-85e5-4409394787d8" containerID="b480ed0acac6b758c3509f46390f13b6c95e2c152500933d17250ea1647f5f43" exitCode=0
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.647914 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8x4" event={"ID":"ad0262dd-3967-43d2-85e5-4409394787d8","Type":"ContainerDied","Data":"b480ed0acac6b758c3509f46390f13b6c95e2c152500933d17250ea1647f5f43"}
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.648015 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2zl2t" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerName="registry-server" containerID="cri-o://2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed" gracePeriod=2
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.795112 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jf8x4"
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.885471 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-catalog-content\") pod \"ad0262dd-3967-43d2-85e5-4409394787d8\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") "
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.885526 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr4pv\" (UniqueName: \"kubernetes.io/projected/ad0262dd-3967-43d2-85e5-4409394787d8-kube-api-access-nr4pv\") pod \"ad0262dd-3967-43d2-85e5-4409394787d8\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") "
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.885579 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-utilities\") pod \"ad0262dd-3967-43d2-85e5-4409394787d8\" (UID: \"ad0262dd-3967-43d2-85e5-4409394787d8\") "
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.886546 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-utilities" (OuterVolumeSpecName: "utilities") pod "ad0262dd-3967-43d2-85e5-4409394787d8" (UID: "ad0262dd-3967-43d2-85e5-4409394787d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.892755 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0262dd-3967-43d2-85e5-4409394787d8-kube-api-access-nr4pv" (OuterVolumeSpecName: "kube-api-access-nr4pv") pod "ad0262dd-3967-43d2-85e5-4409394787d8" (UID: "ad0262dd-3967-43d2-85e5-4409394787d8"). InnerVolumeSpecName "kube-api-access-nr4pv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.937062 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad0262dd-3967-43d2-85e5-4409394787d8" (UID: "ad0262dd-3967-43d2-85e5-4409394787d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.987202 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.987237 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr4pv\" (UniqueName: \"kubernetes.io/projected/ad0262dd-3967-43d2-85e5-4409394787d8-kube-api-access-nr4pv\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:51 crc kubenswrapper[4904]: I1121 13:35:51.987250 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad0262dd-3967-43d2-85e5-4409394787d8-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.004451 4904 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.088965 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-utilities\") pod \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.089086 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/6ee0aa5a-10c2-48db-badf-2942d5366d1b-kube-api-access-c7g9z\") pod \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.089236 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-catalog-content\") pod \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\" (UID: \"6ee0aa5a-10c2-48db-badf-2942d5366d1b\") " Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.090555 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-utilities" (OuterVolumeSpecName: "utilities") pod "6ee0aa5a-10c2-48db-badf-2942d5366d1b" (UID: "6ee0aa5a-10c2-48db-badf-2942d5366d1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.092343 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee0aa5a-10c2-48db-badf-2942d5366d1b-kube-api-access-c7g9z" (OuterVolumeSpecName: "kube-api-access-c7g9z") pod "6ee0aa5a-10c2-48db-badf-2942d5366d1b" (UID: "6ee0aa5a-10c2-48db-badf-2942d5366d1b"). InnerVolumeSpecName "kube-api-access-c7g9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.165170 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ee0aa5a-10c2-48db-badf-2942d5366d1b" (UID: "6ee0aa5a-10c2-48db-badf-2942d5366d1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.191038 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.191087 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ee0aa5a-10c2-48db-badf-2942d5366d1b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.191098 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7g9z\" (UniqueName: \"kubernetes.io/projected/6ee0aa5a-10c2-48db-badf-2942d5366d1b-kube-api-access-c7g9z\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.656891 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8x4" event={"ID":"ad0262dd-3967-43d2-85e5-4409394787d8","Type":"ContainerDied","Data":"4d4a21eed98e4150742d3d882cf4ff656669aaf44bd77867027a6c200728e8eb"} Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.656931 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jf8x4" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.656983 4904 scope.go:117] "RemoveContainer" containerID="b480ed0acac6b758c3509f46390f13b6c95e2c152500933d17250ea1647f5f43" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.661261 4904 generic.go:334] "Generic (PLEG): container finished" podID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerID="2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed" exitCode=0 Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.661311 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zl2t" event={"ID":"6ee0aa5a-10c2-48db-badf-2942d5366d1b","Type":"ContainerDied","Data":"2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed"} Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.661367 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2zl2t" event={"ID":"6ee0aa5a-10c2-48db-badf-2942d5366d1b","Type":"ContainerDied","Data":"730f0a82e58a275a6b197356e852e10beaefe65240eeb1e993c15634d8559497"} Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.661400 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2zl2t" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.679257 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2zl2t"] Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.682624 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2zl2t"] Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.687062 4904 scope.go:117] "RemoveContainer" containerID="0e74b7ddd31684dc0fc372df625d6d3905ca3fe8825706fa8cf8d7c0a60e45ea" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.691359 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jf8x4"] Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.693900 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jf8x4"] Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.704490 4904 scope.go:117] "RemoveContainer" containerID="3fb77351a8ed6835fd07c8aa162fbd1d93fba0e658e7b2b3e39ad92c53a1dbeb" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.722631 4904 scope.go:117] "RemoveContainer" containerID="2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.745207 4904 scope.go:117] "RemoveContainer" containerID="250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.762609 4904 scope.go:117] "RemoveContainer" containerID="354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.780707 4904 scope.go:117] "RemoveContainer" containerID="2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed" Nov 21 13:35:52 crc kubenswrapper[4904]: E1121 13:35:52.781384 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed\": container with ID starting with 2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed not found: ID does not exist" containerID="2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.781476 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed"} err="failed to get container status \"2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed\": rpc error: code = NotFound desc = could not find container \"2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed\": container with ID starting with 2843df693d32590fa9bd552babc8887467887d82de76f1e99781c0785c1cc1ed not found: ID does not exist" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.781641 4904 scope.go:117] "RemoveContainer" containerID="250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455" Nov 21 13:35:52 crc kubenswrapper[4904]: E1121 13:35:52.782186 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455\": container with ID starting with 250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455 not found: ID does not exist" containerID="250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455" Nov 21 
13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.782223 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455"} err="failed to get container status \"250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455\": rpc error: code = NotFound desc = could not find container \"250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455\": container with ID starting with 250fbd8d62c45b773e1ef67b1450ddc7be597490c5a5c521a502b29af0451455 not found: ID does not exist" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.782247 4904 scope.go:117] "RemoveContainer" containerID="354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682" Nov 21 13:35:52 crc kubenswrapper[4904]: E1121 13:35:52.782642 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682\": container with ID starting with 354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682 not found: ID does not exist" containerID="354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682" Nov 21 13:35:52 crc kubenswrapper[4904]: I1121 13:35:52.782679 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682"} err="failed to get container status \"354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682\": rpc error: code = NotFound desc = could not find container \"354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682\": container with ID starting with 354885d966d5dc098c5562a5d4ee59ee2a176cf1bd60b6b134fe8238379d3682 not found: ID does not exist" Nov 21 13:35:54 crc kubenswrapper[4904]: I1121 13:35:54.519639 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" path="/var/lib/kubelet/pods/6ee0aa5a-10c2-48db-badf-2942d5366d1b/volumes" Nov 21 13:35:54 crc kubenswrapper[4904]: I1121 13:35:54.521499 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" path="/var/lib/kubelet/pods/ad0262dd-3967-43d2-85e5-4409394787d8/volumes" Nov 21 13:35:57 crc kubenswrapper[4904]: I1121 13:35:57.364313 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:35:57 crc kubenswrapper[4904]: I1121 13:35:57.426156 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:35:57 crc kubenswrapper[4904]: I1121 13:35:57.547742 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" podUID="df1572fa-33e4-4828-b32d-9721b2df142d" containerName="oauth-openshift" containerID="cri-o://1f33b89ef8f1391622ce75eb2b2952fa8eefec5601a2e24f079ad58520fe39de" gracePeriod=15 Nov 21 13:35:57 crc kubenswrapper[4904]: I1121 13:35:57.695118 4904 generic.go:334] "Generic (PLEG): container finished" podID="df1572fa-33e4-4828-b32d-9721b2df142d" containerID="1f33b89ef8f1391622ce75eb2b2952fa8eefec5601a2e24f079ad58520fe39de" exitCode=0 Nov 21 13:35:57 crc kubenswrapper[4904]: I1121 13:35:57.695215 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" 
event={"ID":"df1572fa-33e4-4828-b32d-9721b2df142d","Type":"ContainerDied","Data":"1f33b89ef8f1391622ce75eb2b2952fa8eefec5601a2e24f079ad58520fe39de"} Nov 21 13:35:57 crc kubenswrapper[4904]: I1121 13:35:57.728226 4904 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-7bm6t container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Nov 21 13:35:57 crc kubenswrapper[4904]: I1121 13:35:57.728304 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" podUID="df1572fa-33e4-4828-b32d-9721b2df142d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.010012 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.113467 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.113574 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.113644 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.114621 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.114706 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942" gracePeriod=600 Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176562 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-cliconfig\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176624 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-session\") pod 
\"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176691 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-login\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176736 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffcsm\" (UniqueName: \"kubernetes.io/projected/df1572fa-33e4-4828-b32d-9721b2df142d-kube-api-access-ffcsm\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176787 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-service-ca\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176849 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-router-certs\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176911 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-provider-selection\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176935 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-ocp-branding-template\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.176973 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-trusted-ca-bundle\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177454 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-serving-cert\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177507 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-idp-0-file-data\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: 
\"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177529 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-audit-policies\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177556 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-error\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177572 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df1572fa-33e4-4828-b32d-9721b2df142d-audit-dir\") pod \"df1572fa-33e4-4828-b32d-9721b2df142d\" (UID: \"df1572fa-33e4-4828-b32d-9721b2df142d\") " Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177621 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177673 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df1572fa-33e4-4828-b32d-9721b2df142d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177678 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177904 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177922 4904 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/df1572fa-33e4-4828-b32d-9721b2df142d-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.177935 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.178482 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.178783 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.191988 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.192438 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.192806 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.193071 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.193348 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.193399 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.193686 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.193763 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.194295 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1572fa-33e4-4828-b32d-9721b2df142d-kube-api-access-ffcsm" (OuterVolumeSpecName: "kube-api-access-ffcsm") pod "df1572fa-33e4-4828-b32d-9721b2df142d" (UID: "df1572fa-33e4-4828-b32d-9721b2df142d"). InnerVolumeSpecName "kube-api-access-ffcsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.278934 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.278965 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.278976 4904 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.278986 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.278998 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.279007 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.279017 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffcsm\" (UniqueName: \"kubernetes.io/projected/df1572fa-33e4-4828-b32d-9721b2df142d-kube-api-access-ffcsm\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.279026 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.279037 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.279049 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.279060 4904 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/df1572fa-33e4-4828-b32d-9721b2df142d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.707307 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" 
containerID="53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942" exitCode=0 Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.707401 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942"} Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.708065 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"1eccabc69ccc2202e8628ad2146eaa449cb59e8e720cbf26d256e45f66a151f7"} Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.709751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" event={"ID":"df1572fa-33e4-4828-b32d-9721b2df142d","Type":"ContainerDied","Data":"014f2e045780d1a94dcc90cf78ca0a1c649c80ee4771be1d1e19a381748c8bbe"} Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.709799 4904 scope.go:117] "RemoveContainer" containerID="1f33b89ef8f1391622ce75eb2b2952fa8eefec5601a2e24f079ad58520fe39de" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.709907 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-7bm6t" Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.740718 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7bm6t"] Nov 21 13:35:58 crc kubenswrapper[4904]: I1121 13:35:58.744085 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-7bm6t"] Nov 21 13:35:59 crc kubenswrapper[4904]: I1121 13:35:59.378361 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:35:59 crc kubenswrapper[4904]: I1121 13:35:59.378461 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:35:59 crc kubenswrapper[4904]: I1121 13:35:59.429777 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:35:59 crc kubenswrapper[4904]: I1121 13:35:59.784360 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.532561 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1572fa-33e4-4828-b32d-9721b2df142d" path="/var/lib/kubelet/pods/df1572fa-33e4-4828-b32d-9721b2df142d/volumes" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954019 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9"] Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954248 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="extract-utilities" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954264 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="extract-utilities" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954277 4904 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954286 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954300 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7060588a-553a-49b5-b808-1bf2c60e65d1" containerName="pruner" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954310 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7060588a-553a-49b5-b808-1bf2c60e65d1" containerName="pruner" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954324 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10" containerName="pruner" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954332 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10" containerName="pruner" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954344 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1572fa-33e4-4828-b32d-9721b2df142d" containerName="oauth-openshift" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954353 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1572fa-33e4-4828-b32d-9721b2df142d" containerName="oauth-openshift" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954362 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954370 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954383 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="extract-content" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954392 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="extract-content" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954402 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954410 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954422 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" containerName="extract-utilities" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954433 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" containerName="extract-utilities" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954446 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="extract-content" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954456 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="extract-content" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954468 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerName="extract-utilities" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954477 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerName="extract-utilities" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954490 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="extract-utilities" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954498 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="extract-utilities" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954506 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerName="extract-content" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954514 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerName="extract-content" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954525 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954533 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: E1121 13:36:00.954547 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" containerName="extract-content" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954555 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" containerName="extract-content" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954692 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1572fa-33e4-4828-b32d-9721b2df142d" containerName="oauth-openshift" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954709 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f04c1e-a7a8-4b6d-b3a7-cdbdbc251b10" containerName="pruner" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954719 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f58a95-b030-4aa2-8c3b-9c96d3ca29ef" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954734 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee0aa5a-10c2-48db-badf-2942d5366d1b" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954745 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0262dd-3967-43d2-85e5-4409394787d8" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954755 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e95b5c-a2be-4f48-bf7e-fc6823ea986b" containerName="registry-server" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.954768 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7060588a-553a-49b5-b808-1bf2c60e65d1" containerName="pruner" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.955248 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.958999 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.959087 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.960495 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.962315 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.962541 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.962732 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.962861 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.962984 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.964788 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.964938 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.965815 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.965975 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.971309 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9"] Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.972083 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.979806 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 21 13:36:00 crc kubenswrapper[4904]: I1121 13:36:00.982562 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120270 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " 
pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120385 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120425 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d20d54d-66e6-480b-a426-6bc75e776049-audit-dir\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120477 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-audit-policies\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120501 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-session\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120524 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120616 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754bd\" (UniqueName: \"kubernetes.io/projected/9d20d54d-66e6-480b-a426-6bc75e776049-kube-api-access-754bd\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120678 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-error\") pod 
\"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120709 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120739 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120769 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120805 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.120836 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-login\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.222046 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.222133 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d20d54d-66e6-480b-a426-6bc75e776049-audit-dir\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.222168 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-audit-policies\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.222196 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-session\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.222225 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.222259 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-754bd\" (UniqueName: \"kubernetes.io/projected/9d20d54d-66e6-480b-a426-6bc75e776049-kube-api-access-754bd\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.222287 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d20d54d-66e6-480b-a426-6bc75e776049-audit-dir\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.223550 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-audit-policies\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.223924 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-error\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.223972 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.224017 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.224056 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.224090 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.224130 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-login\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.224354 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.224423 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.225637 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-service-ca\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.233689 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.233690 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-session\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.233792 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.234144 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-error\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.234974 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.236157 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.240110 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-login\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.243538 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.249081 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-system-router-certs\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.250504 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/9d20d54d-66e6-480b-a426-6bc75e776049-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.250979 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-754bd\" (UniqueName: \"kubernetes.io/projected/9d20d54d-66e6-480b-a426-6bc75e776049-kube-api-access-754bd\") pod \"oauth-openshift-6bbf4c9fdf-vxxg9\" (UID: \"9d20d54d-66e6-480b-a426-6bc75e776049\") " pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.276159 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.500942 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9"] Nov 21 13:36:01 crc kubenswrapper[4904]: W1121 13:36:01.503747 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d20d54d_66e6_480b_a426_6bc75e776049.slice/crio-2e1dacb73b156baad5e427282f37bb51f5db09c36738b8088ca366e17c2f6251 WatchSource:0}: Error finding container 2e1dacb73b156baad5e427282f37bb51f5db09c36738b8088ca366e17c2f6251: Status 404 returned error can't find the container with id 2e1dacb73b156baad5e427282f37bb51f5db09c36738b8088ca366e17c2f6251 Nov 21 13:36:01 crc kubenswrapper[4904]: I1121 13:36:01.735131 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" event={"ID":"9d20d54d-66e6-480b-a426-6bc75e776049","Type":"ContainerStarted","Data":"2e1dacb73b156baad5e427282f37bb51f5db09c36738b8088ca366e17c2f6251"} Nov 21 13:36:02 crc kubenswrapper[4904]: I1121 13:36:02.743476 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" event={"ID":"9d20d54d-66e6-480b-a426-6bc75e776049","Type":"ContainerStarted","Data":"fd77ecbb940894b947d002fd6a8fcd1da81a4ebd1e4dbb1a0b5e599917b6434c"} Nov 21 13:36:02 crc kubenswrapper[4904]: I1121 13:36:02.744303 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:02 crc kubenswrapper[4904]: I1121 13:36:02.752100 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" Nov 21 13:36:02 crc kubenswrapper[4904]: I1121 13:36:02.773303 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6bbf4c9fdf-vxxg9" podStartSLOduration=30.773280079 podStartE2EDuration="30.773280079s" podCreationTimestamp="2025-11-21 13:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:36:02.772293584 +0000 UTC m=+236.893826126" watchObservedRunningTime="2025-11-21 13:36:02.773280079 +0000 UTC m=+236.894812631" Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.935407 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jt2rv"] Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.937128 4904 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/certified-operators-jt2rv" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="registry-server" containerID="cri-o://0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a" gracePeriod=30 Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.945853 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hltmb"] Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.946810 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hltmb" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="registry-server" containerID="cri-o://e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a" gracePeriod=30 Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.972796 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bff4w"] Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.973143 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" podUID="60d041e7-e6d2-441c-8b41-e908873c0d41" containerName="marketplace-operator" containerID="cri-o://416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c" gracePeriod=30 Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.979321 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5xkb"] Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.979788 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5xkb" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerName="registry-server" containerID="cri-o://8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63" gracePeriod=30 Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.982796 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ww7zw"] Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.988991 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.989755 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlxnw"] Nov 21 13:36:36 crc kubenswrapper[4904]: I1121 13:36:36.991830 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nlxnw" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerName="registry-server" containerID="cri-o://71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3" gracePeriod=30 Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.002953 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ww7zw"] Nov 21 13:36:37 crc kubenswrapper[4904]: E1121 13:36:37.127296 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a is running failed: container process not found" containerID="0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:36:37 crc kubenswrapper[4904]: E1121 13:36:37.128086 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a is running failed: container process not found" containerID="0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:36:37 crc kubenswrapper[4904]: E1121 13:36:37.128723 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a is running failed: container process not found" containerID="0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:36:37 crc kubenswrapper[4904]: E1121 13:36:37.128761 4904 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-jt2rv" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="registry-server" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.172121 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd6f0ea3-c491-4f01-a129-e2e0119808b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.172206 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd6f0ea3-c491-4f01-a129-e2e0119808b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 
crc kubenswrapper[4904]: I1121 13:36:37.172235 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfg2h\" (UniqueName: \"kubernetes.io/projected/dd6f0ea3-c491-4f01-a129-e2e0119808b7-kube-api-access-jfg2h\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.273893 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd6f0ea3-c491-4f01-a129-e2e0119808b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.273976 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd6f0ea3-c491-4f01-a129-e2e0119808b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.274001 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfg2h\" (UniqueName: \"kubernetes.io/projected/dd6f0ea3-c491-4f01-a129-e2e0119808b7-kube-api-access-jfg2h\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.275926 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd6f0ea3-c491-4f01-a129-e2e0119808b7-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.283488 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd6f0ea3-c491-4f01-a129-e2e0119808b7-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.297265 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfg2h\" (UniqueName: \"kubernetes.io/projected/dd6f0ea3-c491-4f01-a129-e2e0119808b7-kube-api-access-jfg2h\") pod \"marketplace-operator-79b997595-ww7zw\" (UID: \"dd6f0ea3-c491-4f01-a129-e2e0119808b7\") " pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: E1121 13:36:37.326827 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a is running failed: container process not found" containerID="e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:36:37 crc kubenswrapper[4904]: E1121 13:36:37.327453 4904 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a is running failed: container process not found" containerID="e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:36:37 crc kubenswrapper[4904]: E1121 13:36:37.327954 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a is running failed: container process not found" containerID="e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:36:37 crc kubenswrapper[4904]: E1121 13:36:37.327999 4904 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hltmb" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="registry-server" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.429964 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.433840 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.438125 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.444934 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.460823 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.471603 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580690 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-catalog-content\") pod \"572a2042-cdab-4f41-963e-2af4eb50fa19\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580767 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq2dk\" (UniqueName: \"kubernetes.io/projected/9e74e7f1-ac79-4360-9c5a-27488d4b985d-kube-api-access-lq2dk\") pod \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580813 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-trusted-ca\") pod \"60d041e7-e6d2-441c-8b41-e908873c0d41\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580838 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbt7z\" (UniqueName: \"kubernetes.io/projected/3de70b5e-de0f-4506-a803-cc14ce3112b2-kube-api-access-tbt7z\") pod \"3de70b5e-de0f-4506-a803-cc14ce3112b2\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580887 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-utilities\") pod \"572a2042-cdab-4f41-963e-2af4eb50fa19\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580915 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmslj\" (UniqueName: \"kubernetes.io/projected/4b3df79f-6e68-4914-ac06-07efe94a329d-kube-api-access-nmslj\") pod \"4b3df79f-6e68-4914-ac06-07efe94a329d\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580936 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-utilities\") pod \"4b3df79f-6e68-4914-ac06-07efe94a329d\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580962 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-catalog-content\") pod \"3de70b5e-de0f-4506-a803-cc14ce3112b2\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.580990 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-catalog-content\") pod \"4b3df79f-6e68-4914-ac06-07efe94a329d\" (UID: \"4b3df79f-6e68-4914-ac06-07efe94a329d\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.581030 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-catalog-content\") pod \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.581055 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-operator-metrics\") pod \"60d041e7-e6d2-441c-8b41-e908873c0d41\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.581087 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-utilities\") pod \"3de70b5e-de0f-4506-a803-cc14ce3112b2\" (UID: \"3de70b5e-de0f-4506-a803-cc14ce3112b2\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.581112 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx7lt\" (UniqueName: \"kubernetes.io/projected/572a2042-cdab-4f41-963e-2af4eb50fa19-kube-api-access-rx7lt\") pod \"572a2042-cdab-4f41-963e-2af4eb50fa19\" (UID: \"572a2042-cdab-4f41-963e-2af4eb50fa19\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.581164 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6n7j\" (UniqueName: \"kubernetes.io/projected/60d041e7-e6d2-441c-8b41-e908873c0d41-kube-api-access-b6n7j\") pod \"60d041e7-e6d2-441c-8b41-e908873c0d41\" (UID: \"60d041e7-e6d2-441c-8b41-e908873c0d41\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.581185 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-utilities\") pod \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\" (UID: \"9e74e7f1-ac79-4360-9c5a-27488d4b985d\") " Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.581919 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-utilities" (OuterVolumeSpecName: "utilities") pod "572a2042-cdab-4f41-963e-2af4eb50fa19" (UID: "572a2042-cdab-4f41-963e-2af4eb50fa19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.582077 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-utilities" (OuterVolumeSpecName: "utilities") pod "4b3df79f-6e68-4914-ac06-07efe94a329d" (UID: "4b3df79f-6e68-4914-ac06-07efe94a329d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.583929 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "60d041e7-e6d2-441c-8b41-e908873c0d41" (UID: "60d041e7-e6d2-441c-8b41-e908873c0d41"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.584654 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-utilities" (OuterVolumeSpecName: "utilities") pod "3de70b5e-de0f-4506-a803-cc14ce3112b2" (UID: "3de70b5e-de0f-4506-a803-cc14ce3112b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.587269 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-utilities" (OuterVolumeSpecName: "utilities") pod "9e74e7f1-ac79-4360-9c5a-27488d4b985d" (UID: "9e74e7f1-ac79-4360-9c5a-27488d4b985d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.589191 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572a2042-cdab-4f41-963e-2af4eb50fa19-kube-api-access-rx7lt" (OuterVolumeSpecName: "kube-api-access-rx7lt") pod "572a2042-cdab-4f41-963e-2af4eb50fa19" (UID: "572a2042-cdab-4f41-963e-2af4eb50fa19"). InnerVolumeSpecName "kube-api-access-rx7lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.590855 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "60d041e7-e6d2-441c-8b41-e908873c0d41" (UID: "60d041e7-e6d2-441c-8b41-e908873c0d41"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.593402 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60d041e7-e6d2-441c-8b41-e908873c0d41-kube-api-access-b6n7j" (OuterVolumeSpecName: "kube-api-access-b6n7j") pod "60d041e7-e6d2-441c-8b41-e908873c0d41" (UID: "60d041e7-e6d2-441c-8b41-e908873c0d41"). InnerVolumeSpecName "kube-api-access-b6n7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.594868 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de70b5e-de0f-4506-a803-cc14ce3112b2-kube-api-access-tbt7z" (OuterVolumeSpecName: "kube-api-access-tbt7z") pod "3de70b5e-de0f-4506-a803-cc14ce3112b2" (UID: "3de70b5e-de0f-4506-a803-cc14ce3112b2"). InnerVolumeSpecName "kube-api-access-tbt7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.595975 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b3df79f-6e68-4914-ac06-07efe94a329d-kube-api-access-nmslj" (OuterVolumeSpecName: "kube-api-access-nmslj") pod "4b3df79f-6e68-4914-ac06-07efe94a329d" (UID: "4b3df79f-6e68-4914-ac06-07efe94a329d"). InnerVolumeSpecName "kube-api-access-nmslj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.596122 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e74e7f1-ac79-4360-9c5a-27488d4b985d-kube-api-access-lq2dk" (OuterVolumeSpecName: "kube-api-access-lq2dk") pod "9e74e7f1-ac79-4360-9c5a-27488d4b985d" (UID: "9e74e7f1-ac79-4360-9c5a-27488d4b985d"). InnerVolumeSpecName "kube-api-access-lq2dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.612459 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3de70b5e-de0f-4506-a803-cc14ce3112b2" (UID: "3de70b5e-de0f-4506-a803-cc14ce3112b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.638485 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e74e7f1-ac79-4360-9c5a-27488d4b985d" (UID: "9e74e7f1-ac79-4360-9c5a-27488d4b985d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.647140 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ww7zw"] Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.656714 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "572a2042-cdab-4f41-963e-2af4eb50fa19" (UID: "572a2042-cdab-4f41-963e-2af4eb50fa19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682503 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682534 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq2dk\" (UniqueName: \"kubernetes.io/projected/9e74e7f1-ac79-4360-9c5a-27488d4b985d-kube-api-access-lq2dk\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682546 4904 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682555 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbt7z\" (UniqueName: \"kubernetes.io/projected/3de70b5e-de0f-4506-a803-cc14ce3112b2-kube-api-access-tbt7z\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682564 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/572a2042-cdab-4f41-963e-2af4eb50fa19-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682574 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmslj\" (UniqueName: \"kubernetes.io/projected/4b3df79f-6e68-4914-ac06-07efe94a329d-kube-api-access-nmslj\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682583 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682591 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682599 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682610 4904 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/60d041e7-e6d2-441c-8b41-e908873c0d41-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682618 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3de70b5e-de0f-4506-a803-cc14ce3112b2-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682628 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx7lt\" (UniqueName: \"kubernetes.io/projected/572a2042-cdab-4f41-963e-2af4eb50fa19-kube-api-access-rx7lt\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682636 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6n7j\" (UniqueName: 
\"kubernetes.io/projected/60d041e7-e6d2-441c-8b41-e908873c0d41-kube-api-access-b6n7j\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.682645 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e74e7f1-ac79-4360-9c5a-27488d4b985d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.685752 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b3df79f-6e68-4914-ac06-07efe94a329d" (UID: "4b3df79f-6e68-4914-ac06-07efe94a329d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.783452 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b3df79f-6e68-4914-ac06-07efe94a329d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.966866 4904 generic.go:334] "Generic (PLEG): container finished" podID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerID="0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a" exitCode=0 Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.966903 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2rv" event={"ID":"572a2042-cdab-4f41-963e-2af4eb50fa19","Type":"ContainerDied","Data":"0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.966956 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jt2rv" event={"ID":"572a2042-cdab-4f41-963e-2af4eb50fa19","Type":"ContainerDied","Data":"7d3662c48ee360d610794e9c1ba46e384d409762cc39b9b5fdc2c1e4cf73a9ad"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.966981 4904 scope.go:117] "RemoveContainer" containerID="0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.968335 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jt2rv" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.968881 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" event={"ID":"dd6f0ea3-c491-4f01-a129-e2e0119808b7","Type":"ContainerStarted","Data":"d0ad6518b10a2a70815c4a675285804af468cb17b28e22951fd73e097d17f516"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.968909 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" event={"ID":"dd6f0ea3-c491-4f01-a129-e2e0119808b7","Type":"ContainerStarted","Data":"39a5b7fe31bf6b1a503187cb7e6c5195521a32d564476741a8df354c2e65b96f"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.969132 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.973506 4904 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ww7zw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.973558 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" podUID="dd6f0ea3-c491-4f01-a129-e2e0119808b7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.975877 4904 generic.go:334] "Generic (PLEG): container finished" podID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerID="8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63" exitCode=0 Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.975960 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5xkb" event={"ID":"3de70b5e-de0f-4506-a803-cc14ce3112b2","Type":"ContainerDied","Data":"8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.975971 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5xkb" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.975998 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5xkb" event={"ID":"3de70b5e-de0f-4506-a803-cc14ce3112b2","Type":"ContainerDied","Data":"974f0fcda8f6b2c6e719e0fd4921423adcffdae4cb46f0de4cb1de2c649744ab"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.978434 4904 generic.go:334] "Generic (PLEG): container finished" podID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerID="e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a" exitCode=0 Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.978510 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hltmb" event={"ID":"9e74e7f1-ac79-4360-9c5a-27488d4b985d","Type":"ContainerDied","Data":"e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.978530 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hltmb" event={"ID":"9e74e7f1-ac79-4360-9c5a-27488d4b985d","Type":"ContainerDied","Data":"56400d07ad143cf00daca8488ab652432422101f79b34a2ff2af8e196071418d"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.978603 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hltmb" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.986259 4904 scope.go:117] "RemoveContainer" containerID="375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.986433 4904 generic.go:334] "Generic (PLEG): container finished" podID="60d041e7-e6d2-441c-8b41-e908873c0d41" containerID="416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c" exitCode=0 Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.986552 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.986763 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" event={"ID":"60d041e7-e6d2-441c-8b41-e908873c0d41","Type":"ContainerDied","Data":"416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.986890 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bff4w" event={"ID":"60d041e7-e6d2-441c-8b41-e908873c0d41","Type":"ContainerDied","Data":"b7843149988f4def1c3052aaabcd8a8477dce10028c8e80e3ce94a682f256b32"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.991252 4904 generic.go:334] "Generic (PLEG): container finished" podID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerID="71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3" exitCode=0 Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.991298 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxnw" event={"ID":"4b3df79f-6e68-4914-ac06-07efe94a329d","Type":"ContainerDied","Data":"71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.991324 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nlxnw" event={"ID":"4b3df79f-6e68-4914-ac06-07efe94a329d","Type":"ContainerDied","Data":"a46066dbcfb7633fe19188cb8910dde3661345de9f422d1956b33016d2326afe"} Nov 21 13:36:37 crc kubenswrapper[4904]: I1121 13:36:37.991548 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nlxnw" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.010572 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" podStartSLOduration=2.010546727 podStartE2EDuration="2.010546727s" podCreationTimestamp="2025-11-21 13:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:36:38.005941218 +0000 UTC m=+272.127473780" watchObservedRunningTime="2025-11-21 13:36:38.010546727 +0000 UTC m=+272.132079279" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.012182 4904 scope.go:117] "RemoveContainer" containerID="a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.025645 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jt2rv"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.046998 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jt2rv"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.047167 4904 scope.go:117] "RemoveContainer" containerID="0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.048277 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a\": container with ID starting with 0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a not found: ID does not exist" 
containerID="0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.048332 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a"} err="failed to get container status \"0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a\": rpc error: code = NotFound desc = could not find container \"0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a\": container with ID starting with 0920c9a5a493f5808db81623b97867779ba27ca2163595f7ff9dcd19f361e43a not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.048358 4904 scope.go:117] "RemoveContainer" containerID="375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.048652 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b\": container with ID starting with 375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b not found: ID does not exist" containerID="375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.048690 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b"} err="failed to get container status \"375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b\": rpc error: code = NotFound desc = could not find container \"375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b\": container with ID starting with 375d917449380c1011bec021113fed5ac4ef0b385f62d72e29af3f1f72651d5b not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.048704 4904 scope.go:117] "RemoveContainer" containerID="a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.048912 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16\": container with ID starting with a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16 not found: ID does not exist" containerID="a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.048934 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16"} err="failed to get container status \"a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16\": rpc error: code = NotFound desc = could not find container \"a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16\": container with ID starting with a6a02040551a2f3674ad613482fe468f6050f1104118b8dc42b9e84f235aea16 not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.048950 4904 scope.go:117] "RemoveContainer" containerID="8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.079473 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5xkb"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 
13:36:38.085279 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5xkb"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.090836 4904 scope.go:117] "RemoveContainer" containerID="afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.091643 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hltmb"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.109496 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hltmb"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.115190 4904 scope.go:117] "RemoveContainer" containerID="d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.120025 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bff4w"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.125108 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bff4w"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.130732 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nlxnw"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.132739 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nlxnw"] Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.133553 4904 scope.go:117] "RemoveContainer" containerID="8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.135008 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63\": container with ID starting with 8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63 not found: ID does not exist" containerID="8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.135046 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63"} err="failed to get container status \"8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63\": rpc error: code = NotFound desc = could not find container \"8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63\": container with ID starting with 8fef5e342960b134322d5926d0de727fcd384368f63fcff6dded6fed66eaeb63 not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.135071 4904 scope.go:117] "RemoveContainer" containerID="afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.135562 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa\": container with ID starting with afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa not found: ID does not exist" containerID="afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.135610 4904 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa"} err="failed to get container status \"afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa\": rpc error: code = NotFound desc = could not find container \"afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa\": container with ID starting with afbdead38cdd2062e5d2e994ce8c6d7471f6a491d0de96108c2f0b2050518caa not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.135679 4904 scope.go:117] "RemoveContainer" containerID="d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.136682 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53\": container with ID starting with d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53 not found: ID does not exist" containerID="d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.136708 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53"} err="failed to get container status \"d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53\": rpc error: code = NotFound desc = could not find container \"d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53\": container with ID starting with d70f8f1d48b03f320eb2cf77c2d8e94484c8deb963c84e1f2eeff5852e38bf53 not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.136722 4904 scope.go:117] "RemoveContainer" containerID="e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.153866 4904 scope.go:117] "RemoveContainer" containerID="1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.170283 4904 scope.go:117] "RemoveContainer" containerID="004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.194980 4904 scope.go:117] "RemoveContainer" containerID="e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.195738 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a\": container with ID starting with e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a not found: ID does not exist" containerID="e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.195779 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a"} err="failed to get container status \"e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a\": rpc error: code = NotFound desc = could not find container \"e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a\": container with ID starting with e48afeeee5ba3b1bb30b735d145256ea09ae4af6f216b89fb3e49481f24fee1a not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.195805 4904 
scope.go:117] "RemoveContainer" containerID="1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.196500 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c\": container with ID starting with 1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c not found: ID does not exist" containerID="1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.196524 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c"} err="failed to get container status \"1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c\": rpc error: code = NotFound desc = could not find container \"1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c\": container with ID starting with 1b367aeea6ca280082f80e1b458317aa38092f3d6a1258e7a2dd0650d305ba9c not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.196561 4904 scope.go:117] "RemoveContainer" containerID="004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.196999 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e\": container with ID starting with 004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e not found: ID does not exist" containerID="004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.197029 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e"} err="failed to get container status \"004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e\": rpc error: code = NotFound desc = could not find container \"004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e\": container with ID starting with 004cfa0f444531d10e006c4230b4cb6365e1dfbaa42d94e5a6ef9cdd03f8476e not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.197051 4904 scope.go:117] "RemoveContainer" containerID="416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.217774 4904 scope.go:117] "RemoveContainer" containerID="416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.218849 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c\": container with ID starting with 416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c not found: ID does not exist" containerID="416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.218900 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c"} err="failed to get container status 
\"416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c\": rpc error: code = NotFound desc = could not find container \"416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c\": container with ID starting with 416cde7c371c9fde73d73689a963c1da7effbc59a4d28b85720f28fa3d597f8c not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.218932 4904 scope.go:117] "RemoveContainer" containerID="71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.231763 4904 scope.go:117] "RemoveContainer" containerID="89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.245231 4904 scope.go:117] "RemoveContainer" containerID="114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.257262 4904 scope.go:117] "RemoveContainer" containerID="71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.257759 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3\": container with ID starting with 71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3 not found: ID does not exist" containerID="71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.257810 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3"} err="failed to get container status \"71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3\": rpc error: code = NotFound desc = could not find container \"71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3\": container with ID starting with 71e31085ae75a9de88a99f0a3f704cb38d981bca190f8134002b6f5badc523d3 not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.257843 4904 scope.go:117] "RemoveContainer" containerID="89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125" Nov 21 13:36:38 crc kubenswrapper[4904]: E1121 13:36:38.258469 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125\": container with ID starting with 89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125 not found: ID does not exist" containerID="89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.258512 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125"} err="failed to get container status \"89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125\": rpc error: code = NotFound desc = could not find container \"89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125\": container with ID starting with 89de4b98a3f2475842b3e8662a97e612eb847eaa9ad24cb75d257d196ea64125 not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.258538 4904 scope.go:117] "RemoveContainer" containerID="114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097" Nov 21 13:36:38 crc 
kubenswrapper[4904]: E1121 13:36:38.259010 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097\": container with ID starting with 114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097 not found: ID does not exist" containerID="114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.259067 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097"} err="failed to get container status \"114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097\": rpc error: code = NotFound desc = could not find container \"114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097\": container with ID starting with 114f39dea38b27865d87ae704a7cde887039faca5b119e3afc65830a4f1fa097 not found: ID does not exist" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.518454 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" path="/var/lib/kubelet/pods/3de70b5e-de0f-4506-a803-cc14ce3112b2/volumes" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.519315 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" path="/var/lib/kubelet/pods/4b3df79f-6e68-4914-ac06-07efe94a329d/volumes" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.519974 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" path="/var/lib/kubelet/pods/572a2042-cdab-4f41-963e-2af4eb50fa19/volumes" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.521061 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60d041e7-e6d2-441c-8b41-e908873c0d41" path="/var/lib/kubelet/pods/60d041e7-e6d2-441c-8b41-e908873c0d41/volumes" Nov 21 13:36:38 crc kubenswrapper[4904]: I1121 13:36:38.521556 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" path="/var/lib/kubelet/pods/9e74e7f1-ac79-4360-9c5a-27488d4b985d/volumes" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.003280 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ww7zw" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.154919 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nw24d"] Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155105 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerName="extract-utilities" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155117 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerName="extract-utilities" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155129 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerName="extract-utilities" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155138 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerName="extract-utilities" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155152 4904 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerName="extract-content" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155157 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerName="extract-content" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155166 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155171 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155179 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="extract-content" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155188 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="extract-content" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155198 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="extract-content" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155205 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="extract-content" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155215 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="extract-utilities" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155222 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="extract-utilities" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155230 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="extract-utilities" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155236 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="extract-utilities" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155243 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155249 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155257 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155263 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155270 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60d041e7-e6d2-441c-8b41-e908873c0d41" containerName="marketplace-operator" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155276 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="60d041e7-e6d2-441c-8b41-e908873c0d41" containerName="marketplace-operator" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 
13:36:39.155285 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155290 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: E1121 13:36:39.155300 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerName="extract-content" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155306 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerName="extract-content" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155380 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e74e7f1-ac79-4360-9c5a-27488d4b985d" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155394 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="60d041e7-e6d2-441c-8b41-e908873c0d41" containerName="marketplace-operator" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155401 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de70b5e-de0f-4506-a803-cc14ce3112b2" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155409 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b3df79f-6e68-4914-ac06-07efe94a329d" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.155417 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="572a2042-cdab-4f41-963e-2af4eb50fa19" containerName="registry-server" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.156062 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.157965 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.169821 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw24d"] Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.315695 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80bb5b26-345d-4800-aabe-95a66a08ac79-utilities\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.315791 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzbmr\" (UniqueName: \"kubernetes.io/projected/80bb5b26-345d-4800-aabe-95a66a08ac79-kube-api-access-wzbmr\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.315836 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80bb5b26-345d-4800-aabe-95a66a08ac79-catalog-content\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.357088 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7gch5"] Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.358255 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.361120 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.370467 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7gch5"] Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.416844 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80bb5b26-345d-4800-aabe-95a66a08ac79-utilities\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.416902 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzbmr\" (UniqueName: \"kubernetes.io/projected/80bb5b26-345d-4800-aabe-95a66a08ac79-kube-api-access-wzbmr\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.416922 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80bb5b26-345d-4800-aabe-95a66a08ac79-catalog-content\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.417384 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80bb5b26-345d-4800-aabe-95a66a08ac79-utilities\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.417441 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80bb5b26-345d-4800-aabe-95a66a08ac79-catalog-content\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.436726 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzbmr\" (UniqueName: \"kubernetes.io/projected/80bb5b26-345d-4800-aabe-95a66a08ac79-kube-api-access-wzbmr\") pod \"redhat-marketplace-nw24d\" (UID: \"80bb5b26-345d-4800-aabe-95a66a08ac79\") " pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.471122 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.518023 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa838181-d22a-4b07-b1f1-e7cd7b728745-utilities\") pod \"redhat-operators-7gch5\" (UID: \"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.518115 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fg5k\" (UniqueName: \"kubernetes.io/projected/aa838181-d22a-4b07-b1f1-e7cd7b728745-kube-api-access-9fg5k\") pod \"redhat-operators-7gch5\" (UID: \"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.518181 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa838181-d22a-4b07-b1f1-e7cd7b728745-catalog-content\") pod \"redhat-operators-7gch5\" (UID: \"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.619610 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa838181-d22a-4b07-b1f1-e7cd7b728745-catalog-content\") pod \"redhat-operators-7gch5\" (UID: \"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.620169 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa838181-d22a-4b07-b1f1-e7cd7b728745-utilities\") pod \"redhat-operators-7gch5\" (UID: \"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.620218 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fg5k\" (UniqueName: \"kubernetes.io/projected/aa838181-d22a-4b07-b1f1-e7cd7b728745-kube-api-access-9fg5k\") pod \"redhat-operators-7gch5\" (UID: \"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.620279 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa838181-d22a-4b07-b1f1-e7cd7b728745-catalog-content\") pod \"redhat-operators-7gch5\" (UID: \"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.620879 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa838181-d22a-4b07-b1f1-e7cd7b728745-utilities\") pod \"redhat-operators-7gch5\" (UID: \"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.640705 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fg5k\" (UniqueName: \"kubernetes.io/projected/aa838181-d22a-4b07-b1f1-e7cd7b728745-kube-api-access-9fg5k\") pod \"redhat-operators-7gch5\" (UID: 
\"aa838181-d22a-4b07-b1f1-e7cd7b728745\") " pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.681605 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nw24d"] Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.683985 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:39 crc kubenswrapper[4904]: I1121 13:36:39.934494 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7gch5"] Nov 21 13:36:39 crc kubenswrapper[4904]: W1121 13:36:39.998589 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa838181_d22a_4b07_b1f1_e7cd7b728745.slice/crio-0a876bce888cc0785c0fd67576da3ba1df22b346ba110cc7257fc638eb2cfd78 WatchSource:0}: Error finding container 0a876bce888cc0785c0fd67576da3ba1df22b346ba110cc7257fc638eb2cfd78: Status 404 returned error can't find the container with id 0a876bce888cc0785c0fd67576da3ba1df22b346ba110cc7257fc638eb2cfd78 Nov 21 13:36:40 crc kubenswrapper[4904]: I1121 13:36:40.009381 4904 generic.go:334] "Generic (PLEG): container finished" podID="80bb5b26-345d-4800-aabe-95a66a08ac79" containerID="c52966a62f37e0400832e57d532740cf6009228326374ed9e79f410202601a3f" exitCode=0 Nov 21 13:36:40 crc kubenswrapper[4904]: I1121 13:36:40.009472 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw24d" event={"ID":"80bb5b26-345d-4800-aabe-95a66a08ac79","Type":"ContainerDied","Data":"c52966a62f37e0400832e57d532740cf6009228326374ed9e79f410202601a3f"} Nov 21 13:36:40 crc kubenswrapper[4904]: I1121 13:36:40.010892 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw24d" event={"ID":"80bb5b26-345d-4800-aabe-95a66a08ac79","Type":"ContainerStarted","Data":"2e6ced09d789af9085402f6a232d8b6f5ad465389ef9510c3239023544c3a79e"} Nov 21 13:36:40 crc kubenswrapper[4904]: I1121 13:36:40.012725 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gch5" event={"ID":"aa838181-d22a-4b07-b1f1-e7cd7b728745","Type":"ContainerStarted","Data":"0a876bce888cc0785c0fd67576da3ba1df22b346ba110cc7257fc638eb2cfd78"} Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.020969 4904 generic.go:334] "Generic (PLEG): container finished" podID="aa838181-d22a-4b07-b1f1-e7cd7b728745" containerID="8103fd7d9253085d9a7f359f173ebe40e8763c884e53ac45ea3a65ec2a7f6011" exitCode=0 Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.021100 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gch5" event={"ID":"aa838181-d22a-4b07-b1f1-e7cd7b728745","Type":"ContainerDied","Data":"8103fd7d9253085d9a7f359f173ebe40e8763c884e53ac45ea3a65ec2a7f6011"} Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.023893 4904 generic.go:334] "Generic (PLEG): container finished" podID="80bb5b26-345d-4800-aabe-95a66a08ac79" containerID="0b9f7983dc28fbef93730e3c4145df572310e4c4f863da53e2a1fe1dada114c6" exitCode=0 Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.023939 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw24d" event={"ID":"80bb5b26-345d-4800-aabe-95a66a08ac79","Type":"ContainerDied","Data":"0b9f7983dc28fbef93730e3c4145df572310e4c4f863da53e2a1fe1dada114c6"} Nov 21 13:36:41 
crc kubenswrapper[4904]: I1121 13:36:41.557592 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b7jgw"] Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.559720 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.561702 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.569024 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7jgw"] Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.658769 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55d1622-65de-445c-9699-33f122796f2e-catalog-content\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.658881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55d1622-65de-445c-9699-33f122796f2e-utilities\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.658927 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvpn\" (UniqueName: \"kubernetes.io/projected/d55d1622-65de-445c-9699-33f122796f2e-kube-api-access-kvvpn\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.759872 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55d1622-65de-445c-9699-33f122796f2e-utilities\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.759942 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvvpn\" (UniqueName: \"kubernetes.io/projected/d55d1622-65de-445c-9699-33f122796f2e-kube-api-access-kvvpn\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.759974 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55d1622-65de-445c-9699-33f122796f2e-catalog-content\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.760874 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55d1622-65de-445c-9699-33f122796f2e-utilities\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc 
kubenswrapper[4904]: I1121 13:36:41.760907 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55d1622-65de-445c-9699-33f122796f2e-catalog-content\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.762241 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-77hfr"] Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.763236 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.766083 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.782078 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvvpn\" (UniqueName: \"kubernetes.io/projected/d55d1622-65de-445c-9699-33f122796f2e-kube-api-access-kvvpn\") pod \"certified-operators-b7jgw\" (UID: \"d55d1622-65de-445c-9699-33f122796f2e\") " pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.789121 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77hfr"] Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.882107 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.962938 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-catalog-content\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.963443 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-utilities\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:41 crc kubenswrapper[4904]: I1121 13:36:41.963507 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw22t\" (UniqueName: \"kubernetes.io/projected/13c247bc-336f-4ad1-ad29-e860a1f22730-kube-api-access-zw22t\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.043616 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gch5" event={"ID":"aa838181-d22a-4b07-b1f1-e7cd7b728745","Type":"ContainerStarted","Data":"1c923fdc66b0c605c7581e4b0f5ebc2c9204c6ae395db4cf0e9d657f8e3691a9"} Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.052522 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nw24d" 
event={"ID":"80bb5b26-345d-4800-aabe-95a66a08ac79","Type":"ContainerStarted","Data":"939069f66ea1fee05195084ca42b864aef60bddb774fb3d5d9b0d921346d3b37"} Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.065462 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-catalog-content\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.065511 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-utilities\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.065567 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw22t\" (UniqueName: \"kubernetes.io/projected/13c247bc-336f-4ad1-ad29-e860a1f22730-kube-api-access-zw22t\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.066486 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-catalog-content\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.066807 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-utilities\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.086957 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw22t\" (UniqueName: \"kubernetes.io/projected/13c247bc-336f-4ad1-ad29-e860a1f22730-kube-api-access-zw22t\") pod \"community-operators-77hfr\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.093645 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nw24d" podStartSLOduration=1.675743011 podStartE2EDuration="3.093623595s" podCreationTimestamp="2025-11-21 13:36:39 +0000 UTC" firstStartedPulling="2025-11-21 13:36:40.01161469 +0000 UTC m=+274.133147242" lastFinishedPulling="2025-11-21 13:36:41.429495274 +0000 UTC m=+275.551027826" observedRunningTime="2025-11-21 13:36:42.086270017 +0000 UTC m=+276.207802569" watchObservedRunningTime="2025-11-21 13:36:42.093623595 +0000 UTC m=+276.215156137" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.111565 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b7jgw"] Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.379887 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:42 crc kubenswrapper[4904]: I1121 13:36:42.614129 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77hfr"] Nov 21 13:36:43 crc kubenswrapper[4904]: I1121 13:36:43.062419 4904 generic.go:334] "Generic (PLEG): container finished" podID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerID="904e394c9f254c40ed303b7f35fa748c2f9e6beaef7f008dca78aa7e40a931fd" exitCode=0 Nov 21 13:36:43 crc kubenswrapper[4904]: I1121 13:36:43.062907 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77hfr" event={"ID":"13c247bc-336f-4ad1-ad29-e860a1f22730","Type":"ContainerDied","Data":"904e394c9f254c40ed303b7f35fa748c2f9e6beaef7f008dca78aa7e40a931fd"} Nov 21 13:36:43 crc kubenswrapper[4904]: I1121 13:36:43.062939 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77hfr" event={"ID":"13c247bc-336f-4ad1-ad29-e860a1f22730","Type":"ContainerStarted","Data":"4cb707cf4471cd0449564eff3904506a8012582e6daf9e4651df470b54f6f322"} Nov 21 13:36:43 crc kubenswrapper[4904]: I1121 13:36:43.068910 4904 generic.go:334] "Generic (PLEG): container finished" podID="aa838181-d22a-4b07-b1f1-e7cd7b728745" containerID="1c923fdc66b0c605c7581e4b0f5ebc2c9204c6ae395db4cf0e9d657f8e3691a9" exitCode=0 Nov 21 13:36:43 crc kubenswrapper[4904]: I1121 13:36:43.068964 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gch5" event={"ID":"aa838181-d22a-4b07-b1f1-e7cd7b728745","Type":"ContainerDied","Data":"1c923fdc66b0c605c7581e4b0f5ebc2c9204c6ae395db4cf0e9d657f8e3691a9"} Nov 21 13:36:43 crc kubenswrapper[4904]: I1121 13:36:43.072035 4904 generic.go:334] "Generic (PLEG): container finished" podID="d55d1622-65de-445c-9699-33f122796f2e" containerID="f46d3ab96287c348ee18f42f1b66b95c217f81a127b3c0554cd209a57c9b7108" exitCode=0 Nov 21 13:36:43 crc kubenswrapper[4904]: I1121 13:36:43.072920 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7jgw" event={"ID":"d55d1622-65de-445c-9699-33f122796f2e","Type":"ContainerDied","Data":"f46d3ab96287c348ee18f42f1b66b95c217f81a127b3c0554cd209a57c9b7108"} Nov 21 13:36:43 crc kubenswrapper[4904]: I1121 13:36:43.072944 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7jgw" event={"ID":"d55d1622-65de-445c-9699-33f122796f2e","Type":"ContainerStarted","Data":"295043fc8ff93c80914061b711e6b9a843131dc269cad6e8ab536101edc015e6"} Nov 21 13:36:44 crc kubenswrapper[4904]: I1121 13:36:44.083867 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7gch5" event={"ID":"aa838181-d22a-4b07-b1f1-e7cd7b728745","Type":"ContainerStarted","Data":"15acb49c6bccf6214efdce81ed6ac683aa6f809176833dc8c57b5e4668e5a336"} Nov 21 13:36:44 crc kubenswrapper[4904]: I1121 13:36:44.090828 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7jgw" event={"ID":"d55d1622-65de-445c-9699-33f122796f2e","Type":"ContainerStarted","Data":"d137dbdb0ed2a89920e018c0e33cde6477c2eb42e459fab1c046a179f9d7c15b"} Nov 21 13:36:44 crc kubenswrapper[4904]: I1121 13:36:44.093461 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77hfr" 
event={"ID":"13c247bc-336f-4ad1-ad29-e860a1f22730","Type":"ContainerStarted","Data":"a1eba7148395120f4c569645c976e716fe316f79a0ee7855de1ebc8deca178a1"} Nov 21 13:36:44 crc kubenswrapper[4904]: I1121 13:36:44.107021 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7gch5" podStartSLOduration=2.675730368 podStartE2EDuration="5.106999353s" podCreationTimestamp="2025-11-21 13:36:39 +0000 UTC" firstStartedPulling="2025-11-21 13:36:41.025268697 +0000 UTC m=+275.146801289" lastFinishedPulling="2025-11-21 13:36:43.456537722 +0000 UTC m=+277.578070274" observedRunningTime="2025-11-21 13:36:44.104226222 +0000 UTC m=+278.225758774" watchObservedRunningTime="2025-11-21 13:36:44.106999353 +0000 UTC m=+278.228531905" Nov 21 13:36:45 crc kubenswrapper[4904]: I1121 13:36:45.100541 4904 generic.go:334] "Generic (PLEG): container finished" podID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerID="a1eba7148395120f4c569645c976e716fe316f79a0ee7855de1ebc8deca178a1" exitCode=0 Nov 21 13:36:45 crc kubenswrapper[4904]: I1121 13:36:45.100720 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77hfr" event={"ID":"13c247bc-336f-4ad1-ad29-e860a1f22730","Type":"ContainerDied","Data":"a1eba7148395120f4c569645c976e716fe316f79a0ee7855de1ebc8deca178a1"} Nov 21 13:36:45 crc kubenswrapper[4904]: I1121 13:36:45.100914 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77hfr" event={"ID":"13c247bc-336f-4ad1-ad29-e860a1f22730","Type":"ContainerStarted","Data":"20517aae05fcf42d36ba3b9c7996cce9a7f498dbe23d6173f073e179c1d02da2"} Nov 21 13:36:45 crc kubenswrapper[4904]: I1121 13:36:45.104718 4904 generic.go:334] "Generic (PLEG): container finished" podID="d55d1622-65de-445c-9699-33f122796f2e" containerID="d137dbdb0ed2a89920e018c0e33cde6477c2eb42e459fab1c046a179f9d7c15b" exitCode=0 Nov 21 13:36:45 crc kubenswrapper[4904]: I1121 13:36:45.105226 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7jgw" event={"ID":"d55d1622-65de-445c-9699-33f122796f2e","Type":"ContainerDied","Data":"d137dbdb0ed2a89920e018c0e33cde6477c2eb42e459fab1c046a179f9d7c15b"} Nov 21 13:36:45 crc kubenswrapper[4904]: I1121 13:36:45.105259 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b7jgw" event={"ID":"d55d1622-65de-445c-9699-33f122796f2e","Type":"ContainerStarted","Data":"7956e4e0d439153bce593f1005f34a0fff7f11b134118fcb2d3dc18c7f781aa3"} Nov 21 13:36:45 crc kubenswrapper[4904]: I1121 13:36:45.132264 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-77hfr" podStartSLOduration=2.706352325 podStartE2EDuration="4.132239376s" podCreationTimestamp="2025-11-21 13:36:41 +0000 UTC" firstStartedPulling="2025-11-21 13:36:43.064420394 +0000 UTC m=+277.185952946" lastFinishedPulling="2025-11-21 13:36:44.490307455 +0000 UTC m=+278.611839997" observedRunningTime="2025-11-21 13:36:45.128895161 +0000 UTC m=+279.250427723" watchObservedRunningTime="2025-11-21 13:36:45.132239376 +0000 UTC m=+279.253771938" Nov 21 13:36:45 crc kubenswrapper[4904]: I1121 13:36:45.155172 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b7jgw" podStartSLOduration=2.757646189 podStartE2EDuration="4.155148802s" podCreationTimestamp="2025-11-21 13:36:41 +0000 UTC" firstStartedPulling="2025-11-21 
13:36:43.073320543 +0000 UTC m=+277.194853105" lastFinishedPulling="2025-11-21 13:36:44.470823166 +0000 UTC m=+278.592355718" observedRunningTime="2025-11-21 13:36:45.150071313 +0000 UTC m=+279.271603895" watchObservedRunningTime="2025-11-21 13:36:45.155148802 +0000 UTC m=+279.276681354" Nov 21 13:36:49 crc kubenswrapper[4904]: I1121 13:36:49.471523 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:49 crc kubenswrapper[4904]: I1121 13:36:49.472380 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:49 crc kubenswrapper[4904]: I1121 13:36:49.519349 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:49 crc kubenswrapper[4904]: I1121 13:36:49.684453 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:49 crc kubenswrapper[4904]: I1121 13:36:49.684850 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:49 crc kubenswrapper[4904]: I1121 13:36:49.725348 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:50 crc kubenswrapper[4904]: I1121 13:36:50.176069 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7gch5" Nov 21 13:36:50 crc kubenswrapper[4904]: I1121 13:36:50.197525 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nw24d" Nov 21 13:36:51 crc kubenswrapper[4904]: I1121 13:36:51.883036 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:51 crc kubenswrapper[4904]: I1121 13:36:51.883397 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:51 crc kubenswrapper[4904]: I1121 13:36:51.934455 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:52 crc kubenswrapper[4904]: I1121 13:36:52.194023 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b7jgw" Nov 21 13:36:52 crc kubenswrapper[4904]: I1121 13:36:52.381769 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:52 crc kubenswrapper[4904]: I1121 13:36:52.381824 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:52 crc kubenswrapper[4904]: I1121 13:36:52.440326 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:36:53 crc kubenswrapper[4904]: I1121 13:36:53.198534 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-77hfr" Nov 21 13:37:58 crc kubenswrapper[4904]: I1121 13:37:58.113980 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:37:58 crc kubenswrapper[4904]: I1121 13:37:58.114562 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:38:28 crc kubenswrapper[4904]: I1121 13:38:28.114179 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:38:28 crc kubenswrapper[4904]: I1121 13:38:28.115144 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:38:58 crc kubenswrapper[4904]: I1121 13:38:58.113419 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:38:58 crc kubenswrapper[4904]: I1121 13:38:58.114160 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:38:58 crc kubenswrapper[4904]: I1121 13:38:58.114241 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:38:58 crc kubenswrapper[4904]: I1121 13:38:58.115097 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1eccabc69ccc2202e8628ad2146eaa449cb59e8e720cbf26d256e45f66a151f7"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:38:58 crc kubenswrapper[4904]: I1121 13:38:58.115230 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://1eccabc69ccc2202e8628ad2146eaa449cb59e8e720cbf26d256e45f66a151f7" gracePeriod=600 Nov 21 13:38:58 crc kubenswrapper[4904]: I1121 13:38:58.263463 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="1eccabc69ccc2202e8628ad2146eaa449cb59e8e720cbf26d256e45f66a151f7" exitCode=0 Nov 21 13:38:58 crc kubenswrapper[4904]: I1121 13:38:58.263607 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" 
event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"1eccabc69ccc2202e8628ad2146eaa449cb59e8e720cbf26d256e45f66a151f7"} Nov 21 13:38:58 crc kubenswrapper[4904]: I1121 13:38:58.263869 4904 scope.go:117] "RemoveContainer" containerID="53366796dfb0305b68a4213fc3f9794c437dbb6fd7f0df1e01a773110b572942" Nov 21 13:38:59 crc kubenswrapper[4904]: I1121 13:38:59.277188 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"98139dd02c3ca616ca203db2c2722354a90ab7ddd2cf4f5a2d5c1eb3d2693d7a"} Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.459221 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g7br7"] Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.460973 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.473231 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g7br7"] Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.594815 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-bound-sa-token\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.595139 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-registry-certificates\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.595266 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-registry-tls\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.595386 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.595490 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.595589 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.595778 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-trusted-ca\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.595883 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdmnr\" (UniqueName: \"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-kube-api-access-fdmnr\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.626228 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.696762 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-registry-tls\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.696825 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.696869 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.696916 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-trusted-ca\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.696938 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdmnr\" (UniqueName: 
\"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-kube-api-access-fdmnr\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.696963 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-bound-sa-token\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.696989 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-registry-certificates\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.698538 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.699286 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-registry-certificates\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.700054 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-trusted-ca\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.715349 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.722782 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-bound-sa-token\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.726260 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdmnr\" (UniqueName: \"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-kube-api-access-fdmnr\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc 
kubenswrapper[4904]: I1121 13:39:19.726591 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0-registry-tls\") pod \"image-registry-66df7c8f76-g7br7\" (UID: \"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0\") " pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:19 crc kubenswrapper[4904]: I1121 13:39:19.781802 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:20 crc kubenswrapper[4904]: I1121 13:39:20.013444 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-g7br7"] Nov 21 13:39:20 crc kubenswrapper[4904]: I1121 13:39:20.417117 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" event={"ID":"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0","Type":"ContainerStarted","Data":"fa0649b4d432ea1002729af786651e584626d7203bfba8fafff55c8e8901aeaf"} Nov 21 13:39:20 crc kubenswrapper[4904]: I1121 13:39:20.417174 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" event={"ID":"e1392fc0-7443-4d76-b2c3-4a0f1d0ec3c0","Type":"ContainerStarted","Data":"3dcb017fcef0ca690b40f233daeb9aad91236b48ea62cf591f1d3118ecd9dbff"} Nov 21 13:39:20 crc kubenswrapper[4904]: I1121 13:39:20.438740 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" podStartSLOduration=1.438720517 podStartE2EDuration="1.438720517s" podCreationTimestamp="2025-11-21 13:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:39:20.437129567 +0000 UTC m=+434.558662119" watchObservedRunningTime="2025-11-21 13:39:20.438720517 +0000 UTC m=+434.560253069" Nov 21 13:39:21 crc kubenswrapper[4904]: I1121 13:39:21.422874 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:39 crc kubenswrapper[4904]: I1121 13:39:39.788058 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-g7br7" Nov 21 13:39:39 crc kubenswrapper[4904]: I1121 13:39:39.874272 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g96sr"] Nov 21 13:40:04 crc kubenswrapper[4904]: I1121 13:40:04.916908 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" podUID="db301386-8de5-4553-a48d-fd858d4fc6f9" containerName="registry" containerID="cri-o://1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405" gracePeriod=30 Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.339338 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.409285 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-trusted-ca\") pod \"db301386-8de5-4553-a48d-fd858d4fc6f9\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.409373 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t2q4\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-kube-api-access-2t2q4\") pod \"db301386-8de5-4553-a48d-fd858d4fc6f9\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.409469 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-bound-sa-token\") pod \"db301386-8de5-4553-a48d-fd858d4fc6f9\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.409520 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db301386-8de5-4553-a48d-fd858d4fc6f9-installation-pull-secrets\") pod \"db301386-8de5-4553-a48d-fd858d4fc6f9\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.409744 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-certificates\") pod \"db301386-8de5-4553-a48d-fd858d4fc6f9\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.409950 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"db301386-8de5-4553-a48d-fd858d4fc6f9\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.410032 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-tls\") pod \"db301386-8de5-4553-a48d-fd858d4fc6f9\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.410103 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db301386-8de5-4553-a48d-fd858d4fc6f9-ca-trust-extracted\") pod \"db301386-8de5-4553-a48d-fd858d4fc6f9\" (UID: \"db301386-8de5-4553-a48d-fd858d4fc6f9\") " Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.411151 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "db301386-8de5-4553-a48d-fd858d4fc6f9" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.411277 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "db301386-8de5-4553-a48d-fd858d4fc6f9" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.417890 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-kube-api-access-2t2q4" (OuterVolumeSpecName: "kube-api-access-2t2q4") pod "db301386-8de5-4553-a48d-fd858d4fc6f9" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9"). InnerVolumeSpecName "kube-api-access-2t2q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.418107 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "db301386-8de5-4553-a48d-fd858d4fc6f9" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.418304 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "db301386-8de5-4553-a48d-fd858d4fc6f9" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.419011 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db301386-8de5-4553-a48d-fd858d4fc6f9-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "db301386-8de5-4553-a48d-fd858d4fc6f9" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.428912 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "db301386-8de5-4553-a48d-fd858d4fc6f9" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.433874 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db301386-8de5-4553-a48d-fd858d4fc6f9-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "db301386-8de5-4553-a48d-fd858d4fc6f9" (UID: "db301386-8de5-4553-a48d-fd858d4fc6f9"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.511704 4904 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/db301386-8de5-4553-a48d-fd858d4fc6f9-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.511776 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.511788 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t2q4\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-kube-api-access-2t2q4\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.511804 4904 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.511821 4904 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/db301386-8de5-4553-a48d-fd858d4fc6f9-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.511832 4904 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.511843 4904 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/db301386-8de5-4553-a48d-fd858d4fc6f9-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.733305 4904 generic.go:334] "Generic (PLEG): container finished" podID="db301386-8de5-4553-a48d-fd858d4fc6f9" containerID="1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405" exitCode=0 Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.733383 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" event={"ID":"db301386-8de5-4553-a48d-fd858d4fc6f9","Type":"ContainerDied","Data":"1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405"} Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.733442 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" event={"ID":"db301386-8de5-4553-a48d-fd858d4fc6f9","Type":"ContainerDied","Data":"cd741800a8717775e44496673e389b73a334791418337664a5dbc0c8ddace904"} Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.733437 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g96sr" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.733497 4904 scope.go:117] "RemoveContainer" containerID="1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.772790 4904 scope.go:117] "RemoveContainer" containerID="1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405" Nov 21 13:40:05 crc kubenswrapper[4904]: E1121 13:40:05.773510 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405\": container with ID starting with 1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405 not found: ID does not exist" containerID="1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.773747 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405"} err="failed to get container status \"1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405\": rpc error: code = NotFound desc = could not find container \"1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405\": container with ID starting with 1b96825d0e72964bc2ca3e81157953aa05f590177d01a574beb8526ffaf9f405 not found: ID does not exist" Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.790713 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g96sr"] Nov 21 13:40:05 crc kubenswrapper[4904]: I1121 13:40:05.796832 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g96sr"] Nov 21 13:40:06 crc kubenswrapper[4904]: I1121 13:40:06.525507 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db301386-8de5-4553-a48d-fd858d4fc6f9" path="/var/lib/kubelet/pods/db301386-8de5-4553-a48d-fd858d4fc6f9/volumes" Nov 21 13:40:58 crc kubenswrapper[4904]: I1121 13:40:58.114297 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:40:58 crc kubenswrapper[4904]: I1121 13:40:58.115210 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:41:28 crc kubenswrapper[4904]: I1121 13:41:28.114145 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:41:28 crc kubenswrapper[4904]: I1121 13:41:28.114814 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.114116 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.115063 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.115154 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.116087 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"98139dd02c3ca616ca203db2c2722354a90ab7ddd2cf4f5a2d5c1eb3d2693d7a"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.116163 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://98139dd02c3ca616ca203db2c2722354a90ab7ddd2cf4f5a2d5c1eb3d2693d7a" gracePeriod=600 Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.719898 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="98139dd02c3ca616ca203db2c2722354a90ab7ddd2cf4f5a2d5c1eb3d2693d7a" exitCode=0 Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.720349 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"98139dd02c3ca616ca203db2c2722354a90ab7ddd2cf4f5a2d5c1eb3d2693d7a"} Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.720391 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"34e69668c2ea09c1fa2e0c0fbb6545e8671e2e8710d41559fb5aa9d7200b9106"} Nov 21 13:41:58 crc kubenswrapper[4904]: I1121 13:41:58.720415 4904 scope.go:117] "RemoveContainer" containerID="1eccabc69ccc2202e8628ad2146eaa449cb59e8e720cbf26d256e45f66a151f7" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.422284 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l"] Nov 21 13:43:11 crc kubenswrapper[4904]: E1121 13:43:11.423040 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db301386-8de5-4553-a48d-fd858d4fc6f9" containerName="registry" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.423055 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="db301386-8de5-4553-a48d-fd858d4fc6f9" 
containerName="registry" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.423182 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="db301386-8de5-4553-a48d-fd858d4fc6f9" containerName="registry" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.424146 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.427006 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.435529 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l"] Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.567208 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9mbf\" (UniqueName: \"kubernetes.io/projected/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-kube-api-access-k9mbf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.567970 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.568106 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.669637 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.669753 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9mbf\" (UniqueName: \"kubernetes.io/projected/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-kube-api-access-k9mbf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.669812 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: 
\"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.670304 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.670683 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.693160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9mbf\" (UniqueName: \"kubernetes.io/projected/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-kube-api-access-k9mbf\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.755407 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:11 crc kubenswrapper[4904]: I1121 13:43:11.971283 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l"] Nov 21 13:43:12 crc kubenswrapper[4904]: I1121 13:43:12.228758 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" event={"ID":"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c","Type":"ContainerStarted","Data":"56b8f164e837561d0bdad29d75e16eed92d981cbddf7ba10d232129a3566ec61"} Nov 21 13:43:12 crc kubenswrapper[4904]: I1121 13:43:12.228811 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" event={"ID":"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c","Type":"ContainerStarted","Data":"3391b66d0f690450b77498e4d097c9847533e5e59f064c7b05c4fa7d6874d302"} Nov 21 13:43:13 crc kubenswrapper[4904]: I1121 13:43:13.236021 4904 generic.go:334] "Generic (PLEG): container finished" podID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerID="56b8f164e837561d0bdad29d75e16eed92d981cbddf7ba10d232129a3566ec61" exitCode=0 Nov 21 13:43:13 crc kubenswrapper[4904]: I1121 13:43:13.236088 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" event={"ID":"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c","Type":"ContainerDied","Data":"56b8f164e837561d0bdad29d75e16eed92d981cbddf7ba10d232129a3566ec61"} Nov 21 13:43:13 crc kubenswrapper[4904]: I1121 13:43:13.238584 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:43:15 crc kubenswrapper[4904]: I1121 13:43:15.252054 4904 generic.go:334] "Generic (PLEG): 
container finished" podID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerID="d4167efbe7d902f5ef519b35dd43d13c0b0926382166b3ac818d8981c2537d7a" exitCode=0 Nov 21 13:43:15 crc kubenswrapper[4904]: I1121 13:43:15.252126 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" event={"ID":"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c","Type":"ContainerDied","Data":"d4167efbe7d902f5ef519b35dd43d13c0b0926382166b3ac818d8981c2537d7a"} Nov 21 13:43:16 crc kubenswrapper[4904]: I1121 13:43:16.263783 4904 generic.go:334] "Generic (PLEG): container finished" podID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerID="d4c8c62ef28ae8caa610ad42a55716947d6bec648e8da05aa6762c8cd13b4b83" exitCode=0 Nov 21 13:43:16 crc kubenswrapper[4904]: I1121 13:43:16.263828 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" event={"ID":"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c","Type":"ContainerDied","Data":"d4c8c62ef28ae8caa610ad42a55716947d6bec648e8da05aa6762c8cd13b4b83"} Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.532467 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.695271 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-util\") pod \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.695381 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9mbf\" (UniqueName: \"kubernetes.io/projected/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-kube-api-access-k9mbf\") pod \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.695450 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-bundle\") pod \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\" (UID: \"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c\") " Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.698228 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-bundle" (OuterVolumeSpecName: "bundle") pod "64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" (UID: "64f48be9-6d2a-4c5f-adb2-6b2bea485f9c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.706244 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-kube-api-access-k9mbf" (OuterVolumeSpecName: "kube-api-access-k9mbf") pod "64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" (UID: "64f48be9-6d2a-4c5f-adb2-6b2bea485f9c"). InnerVolumeSpecName "kube-api-access-k9mbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.715060 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-util" (OuterVolumeSpecName: "util") pod "64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" (UID: "64f48be9-6d2a-4c5f-adb2-6b2bea485f9c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.796613 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.796680 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9mbf\" (UniqueName: \"kubernetes.io/projected/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-kube-api-access-k9mbf\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:17 crc kubenswrapper[4904]: I1121 13:43:17.796691 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/64f48be9-6d2a-4c5f-adb2-6b2bea485f9c-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:18 crc kubenswrapper[4904]: I1121 13:43:18.280858 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" event={"ID":"64f48be9-6d2a-4c5f-adb2-6b2bea485f9c","Type":"ContainerDied","Data":"3391b66d0f690450b77498e4d097c9847533e5e59f064c7b05c4fa7d6874d302"} Nov 21 13:43:18 crc kubenswrapper[4904]: I1121 13:43:18.280919 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3391b66d0f690450b77498e4d097c9847533e5e59f064c7b05c4fa7d6874d302" Nov 21 13:43:18 crc kubenswrapper[4904]: I1121 13:43:18.281047 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l" Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.301805 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-txkm2"] Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.302812 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovn-controller" containerID="cri-o://5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403" gracePeriod=30 Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.302895 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="northd" containerID="cri-o://d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa" gracePeriod=30 Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.302956 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kube-rbac-proxy-node" containerID="cri-o://df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3" gracePeriod=30 Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.302970 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovn-acl-logging" containerID="cri-o://d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e" gracePeriod=30 Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.303115 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="sbdb" containerID="cri-o://899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6" gracePeriod=30 Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.303136 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f" gracePeriod=30 Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.303154 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="nbdb" containerID="cri-o://650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9" gracePeriod=30 Nov 21 13:43:28 crc kubenswrapper[4904]: I1121 13:43:28.376899 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" containerID="cri-o://fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66" gracePeriod=30 Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.036828 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn"] Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.037071 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerName="pull" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.037084 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerName="pull" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.037095 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerName="util" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.037101 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerName="util" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.037108 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerName="extract" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.037114 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerName="extract" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.037217 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f48be9-6d2a-4c5f-adb2-6b2bea485f9c" containerName="extract" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.037596 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.042405 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-lq2q5" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.042405 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.043320 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.108236 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p"] Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.109034 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.111537 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-4tjk5" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.111953 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.127488 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7"] Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.128719 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.181704 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46d5\" (UniqueName: \"kubernetes.io/projected/c4f53eb1-20cb-4196-89e2-197cecdacc6c-kube-api-access-z46d5\") pod \"obo-prometheus-operator-668cf9dfbb-gg6qn\" (UID: \"c4f53eb1-20cb-4196-89e2-197cecdacc6c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.271723 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-l4jvh"] Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.272509 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.274273 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-grwqh" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.274698 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.282621 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d488c9a-40e8-4cce-a260-c4610af92de8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p\" (UID: \"3d488c9a-40e8-4cce-a260-c4610af92de8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.282692 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f97ec95-046c-4a0e-9ebf-3baf5fed1053-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7\" (UID: \"7f97ec95-046c-4a0e-9ebf-3baf5fed1053\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.282739 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z46d5\" (UniqueName: \"kubernetes.io/projected/c4f53eb1-20cb-4196-89e2-197cecdacc6c-kube-api-access-z46d5\") pod \"obo-prometheus-operator-668cf9dfbb-gg6qn\" (UID: \"c4f53eb1-20cb-4196-89e2-197cecdacc6c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.282830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f97ec95-046c-4a0e-9ebf-3baf5fed1053-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7\" (UID: \"7f97ec95-046c-4a0e-9ebf-3baf5fed1053\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.282856 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d488c9a-40e8-4cce-a260-c4610af92de8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p\" (UID: 
\"3d488c9a-40e8-4cce-a260-c4610af92de8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.309761 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z46d5\" (UniqueName: \"kubernetes.io/projected/c4f53eb1-20cb-4196-89e2-197cecdacc6c-kube-api-access-z46d5\") pod \"obo-prometheus-operator-668cf9dfbb-gg6qn\" (UID: \"c4f53eb1-20cb-4196-89e2-197cecdacc6c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.355807 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.379394 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/3.log" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.381833 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovn-acl-logging/0.log" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.382493 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovn-controller/0.log" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383155 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f" exitCode=0 Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383180 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3" exitCode=0 Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383190 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e" exitCode=143 Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383198 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403" exitCode=143 Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383224 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f"} Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383308 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3"} Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383320 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e"} Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383331 4904 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403"} Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383634 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25xnh\" (UniqueName: \"kubernetes.io/projected/ebdd832d-d268-472a-b067-d61a6c520b7f-kube-api-access-25xnh\") pod \"observability-operator-d8bb48f5d-l4jvh\" (UID: \"ebdd832d-d268-472a-b067-d61a6c520b7f\") " pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383703 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f97ec95-046c-4a0e-9ebf-3baf5fed1053-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7\" (UID: \"7f97ec95-046c-4a0e-9ebf-3baf5fed1053\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383736 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d488c9a-40e8-4cce-a260-c4610af92de8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p\" (UID: \"3d488c9a-40e8-4cce-a260-c4610af92de8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383761 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebdd832d-d268-472a-b067-d61a6c520b7f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-l4jvh\" (UID: \"ebdd832d-d268-472a-b067-d61a6c520b7f\") " pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383785 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d488c9a-40e8-4cce-a260-c4610af92de8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p\" (UID: \"3d488c9a-40e8-4cce-a260-c4610af92de8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.383802 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f97ec95-046c-4a0e-9ebf-3baf5fed1053-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7\" (UID: \"7f97ec95-046c-4a0e-9ebf-3baf5fed1053\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.387800 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(cb394b01241cb3aca398ae16c5af4cff5ffce80dd6ec441ad50e70e03a425e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.387894 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(cb394b01241cb3aca398ae16c5af4cff5ffce80dd6ec441ad50e70e03a425e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn"
Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.387924 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(cb394b01241cb3aca398ae16c5af4cff5ffce80dd6ec441ad50e70e03a425e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn"
Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.387980 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators(c4f53eb1-20cb-4196-89e2-197cecdacc6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators(c4f53eb1-20cb-4196-89e2-197cecdacc6c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(cb394b01241cb3aca398ae16c5af4cff5ffce80dd6ec441ad50e70e03a425e7e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" podUID="c4f53eb1-20cb-4196-89e2-197cecdacc6c"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.389431 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d488c9a-40e8-4cce-a260-c4610af92de8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p\" (UID: \"3d488c9a-40e8-4cce-a260-c4610af92de8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.392892 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f97ec95-046c-4a0e-9ebf-3baf5fed1053-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7\" (UID: \"7f97ec95-046c-4a0e-9ebf-3baf5fed1053\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.393397 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f97ec95-046c-4a0e-9ebf-3baf5fed1053-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7\" (UID: \"7f97ec95-046c-4a0e-9ebf-3baf5fed1053\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.394467 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d488c9a-40e8-4cce-a260-c4610af92de8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p\" (UID: \"3d488c9a-40e8-4cce-a260-c4610af92de8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.398376 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-pqc2t"]
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.399280 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pqc2t"
Nov 21 13:43:29 crc kubenswrapper[4904]: W1121 13:43:29.402269 4904 reflector.go:561] object-"openshift-operators"/"perses-operator-dockercfg-jgl4z": failed to list *v1.Secret: secrets "perses-operator-dockercfg-jgl4z" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object
Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.402386 4904 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"perses-operator-dockercfg-jgl4z\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"perses-operator-dockercfg-jgl4z\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.464313 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.470967 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.484899 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebdd832d-d268-472a-b067-d61a6c520b7f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-l4jvh\" (UID: \"ebdd832d-d268-472a-b067-d61a6c520b7f\") " pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.485092 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25xnh\" (UniqueName: \"kubernetes.io/projected/ebdd832d-d268-472a-b067-d61a6c520b7f-kube-api-access-25xnh\") pod \"observability-operator-d8bb48f5d-l4jvh\" (UID: \"ebdd832d-d268-472a-b067-d61a6c520b7f\") " pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh"
Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.489143 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ebdd832d-d268-472a-b067-d61a6c520b7f-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-l4jvh\" (UID: \"ebdd832d-d268-472a-b067-d61a6c520b7f\") " pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh"
Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.505858 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(a2a98de964885058094a951693c54d3dc5b30fe098bb8332986ca27e94f430cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.505932 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(a2a98de964885058094a951693c54d3dc5b30fe098bb8332986ca27e94f430cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p"
Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.505954 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(a2a98de964885058094a951693c54d3dc5b30fe098bb8332986ca27e94f430cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.506003 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators(3d488c9a-40e8-4cce-a260-c4610af92de8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators(3d488c9a-40e8-4cce-a260-c4610af92de8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(a2a98de964885058094a951693c54d3dc5b30fe098bb8332986ca27e94f430cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" podUID="3d488c9a-40e8-4cce-a260-c4610af92de8" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.510419 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25xnh\" (UniqueName: \"kubernetes.io/projected/ebdd832d-d268-472a-b067-d61a6c520b7f-kube-api-access-25xnh\") pod \"observability-operator-d8bb48f5d-l4jvh\" (UID: \"ebdd832d-d268-472a-b067-d61a6c520b7f\") " pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.540288 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(54edc9de57fd5bf22c504394a07294ee4fa1b53b0721edbaa1da9504ffcb6108): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.540346 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(54edc9de57fd5bf22c504394a07294ee4fa1b53b0721edbaa1da9504ffcb6108): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.540371 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(54edc9de57fd5bf22c504394a07294ee4fa1b53b0721edbaa1da9504ffcb6108): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.540412 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators(7f97ec95-046c-4a0e-9ebf-3baf5fed1053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators(7f97ec95-046c-4a0e-9ebf-3baf5fed1053)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(54edc9de57fd5bf22c504394a07294ee4fa1b53b0721edbaa1da9504ffcb6108): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" podUID="7f97ec95-046c-4a0e-9ebf-3baf5fed1053" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.586753 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbhgd\" (UniqueName: \"kubernetes.io/projected/2d39b52b-adba-414a-ba10-66181feecef9-kube-api-access-zbhgd\") pod \"perses-operator-5446b9c989-pqc2t\" (UID: \"2d39b52b-adba-414a-ba10-66181feecef9\") " pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.587077 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.587204 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d39b52b-adba-414a-ba10-66181feecef9-openshift-service-ca\") pod \"perses-operator-5446b9c989-pqc2t\" (UID: \"2d39b52b-adba-414a-ba10-66181feecef9\") " pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.608692 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(bcb417ddca7cd73960fc20eb1a0452ccbdb32393d1857f87ce8442a2542bf891): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.608758 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(bcb417ddca7cd73960fc20eb1a0452ccbdb32393d1857f87ce8442a2542bf891): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.608782 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(bcb417ddca7cd73960fc20eb1a0452ccbdb32393d1857f87ce8442a2542bf891): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:29 crc kubenswrapper[4904]: E1121 13:43:29.608832 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-l4jvh_openshift-operators(ebdd832d-d268-472a-b067-d61a6c520b7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-l4jvh_openshift-operators(ebdd832d-d268-472a-b067-d61a6c520b7f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(bcb417ddca7cd73960fc20eb1a0452ccbdb32393d1857f87ce8442a2542bf891): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" podUID="ebdd832d-d268-472a-b067-d61a6c520b7f" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.689302 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbhgd\" (UniqueName: \"kubernetes.io/projected/2d39b52b-adba-414a-ba10-66181feecef9-kube-api-access-zbhgd\") pod \"perses-operator-5446b9c989-pqc2t\" (UID: \"2d39b52b-adba-414a-ba10-66181feecef9\") " pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.689429 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d39b52b-adba-414a-ba10-66181feecef9-openshift-service-ca\") pod \"perses-operator-5446b9c989-pqc2t\" (UID: \"2d39b52b-adba-414a-ba10-66181feecef9\") " pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.690351 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d39b52b-adba-414a-ba10-66181feecef9-openshift-service-ca\") pod \"perses-operator-5446b9c989-pqc2t\" (UID: \"2d39b52b-adba-414a-ba10-66181feecef9\") " pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:29 crc kubenswrapper[4904]: I1121 13:43:29.708304 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbhgd\" (UniqueName: \"kubernetes.io/projected/2d39b52b-adba-414a-ba10-66181feecef9-kube-api-access-zbhgd\") pod \"perses-operator-5446b9c989-pqc2t\" (UID: \"2d39b52b-adba-414a-ba10-66181feecef9\") " pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.355806 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/3.log" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.359394 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovn-acl-logging/0.log" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.360275 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovn-controller/0.log" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.360848 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.391407 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovnkube-controller/3.log" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.393680 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovn-acl-logging/0.log" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394142 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txkm2_349c3b8f-5311-4171-ade5-ce7db3d118ad/ovn-controller/0.log" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394549 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66" exitCode=0 Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394575 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6" exitCode=0 Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394583 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9" exitCode=0 Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394592 4904 generic.go:334] "Generic (PLEG): container finished" podID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerID="d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa" exitCode=0 Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394646 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66"} Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394710 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6"} Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394729 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9"} Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394740 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa"} Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394749 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394770 4904 scope.go:117] "RemoveContainer" containerID="fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.394756 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txkm2" event={"ID":"349c3b8f-5311-4171-ade5-ce7db3d118ad","Type":"ContainerDied","Data":"24d8c7eed4849f77f1d9f52cad612a92c215330a1faf9455737e9a369683c78c"} Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.398325 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/2.log" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.398876 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/1.log" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.398907 4904 generic.go:334] "Generic (PLEG): container finished" podID="190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a" containerID="e7bcb85c4309dddf373567192fb1362a6c42fb2260c7c8101594a97cf2d7016d" exitCode=2 Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.398932 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgngm" event={"ID":"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a","Type":"ContainerDied","Data":"e7bcb85c4309dddf373567192fb1362a6c42fb2260c7c8101594a97cf2d7016d"} Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.399429 4904 scope.go:117] "RemoveContainer" containerID="e7bcb85c4309dddf373567192fb1362a6c42fb2260c7c8101594a97cf2d7016d" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.399744 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kgngm_openshift-multus(190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a)\"" pod="openshift-multus/multus-kgngm" podUID="190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.425959 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.452761 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-75c2m"] Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453069 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453088 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453097 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453106 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453115 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 13:43:30 crc kubenswrapper[4904]: 
I1121 13:43:30.453122 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453131 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovn-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453138 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovn-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453145 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="nbdb" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453151 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="nbdb" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453162 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovn-acl-logging" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453167 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovn-acl-logging" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453177 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453183 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453192 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kubecfg-setup" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453199 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kubecfg-setup" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453209 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453220 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453228 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kube-rbac-proxy-node" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453234 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kube-rbac-proxy-node" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453241 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="sbdb" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453247 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="sbdb" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453256 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="northd" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 
13:43:30.453262 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="northd" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453364 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453374 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453383 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="northd" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453392 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="nbdb" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453400 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453406 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453414 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovn-acl-logging" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453422 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovn-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453430 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="sbdb" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453438 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kube-rbac-proxy-node" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453447 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="kube-rbac-proxy-ovn-metrics" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.453540 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453549 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.453900 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" containerName="ovnkube-controller" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.455507 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.460194 4904 scope.go:117] "RemoveContainer" containerID="899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.485570 4904 scope.go:117] "RemoveContainer" containerID="650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499172 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-node-log\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499223 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-openvswitch\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499258 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-slash\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499296 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-658bx\" (UniqueName: \"kubernetes.io/projected/349c3b8f-5311-4171-ade5-ce7db3d118ad-kube-api-access-658bx\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499321 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-netd\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499342 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-config\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499364 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499381 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-log-socket\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499405 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-env-overrides\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499440 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-var-lib-openvswitch\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499457 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-bin\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499477 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-systemd-units\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499500 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovn-node-metrics-cert\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499518 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-ovn\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499560 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-systemd\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499626 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-netns\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499644 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-ovn-kubernetes\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499679 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-script-lib\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499708 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-kubelet\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.499726 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-etc-openvswitch\") pod \"349c3b8f-5311-4171-ade5-ce7db3d118ad\" (UID: \"349c3b8f-5311-4171-ade5-ce7db3d118ad\") " Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.500288 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-node-log" (OuterVolumeSpecName: "node-log") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.500322 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.500343 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-slash" (OuterVolumeSpecName: "host-slash") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.501104 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-log-socket" (OuterVolumeSpecName: "log-socket") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.501251 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.501705 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.501803 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.501874 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.501950 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.502020 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.502233 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.502260 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.502316 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.502316 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.502340 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.502357 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.502385 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.510501 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349c3b8f-5311-4171-ade5-ce7db3d118ad-kube-api-access-658bx" (OuterVolumeSpecName: "kube-api-access-658bx") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "kube-api-access-658bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.511834 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.515435 4904 scope.go:117] "RemoveContainer" containerID="d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.547359 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "349c3b8f-5311-4171-ade5-ce7db3d118ad" (UID: "349c3b8f-5311-4171-ade5-ce7db3d118ad"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.549642 4904 scope.go:117] "RemoveContainer" containerID="3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.567154 4904 scope.go:117] "RemoveContainer" containerID="df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.590250 4904 scope.go:117] "RemoveContainer" containerID="d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601009 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-env-overrides\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601080 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-slash\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601108 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-run-ovn-kubernetes\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601137 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-kubelet\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601176 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovnkube-script-lib\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601201 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-node-log\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601222 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-log-socket\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601253 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-etc-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601277 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-systemd\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601307 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-ovn\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601333 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-systemd-units\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601363 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovn-node-metrics-cert\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601389 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jcp\" (UniqueName: \"kubernetes.io/projected/c3e10385-1e9b-4b11-8370-c39b2406e50b-kube-api-access-h5jcp\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601429 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-var-lib-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601451 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovnkube-config\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601475 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 
21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601498 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601523 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-cni-bin\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601544 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-cni-netd\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601568 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-run-netns\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601616 4904 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601633 4904 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601645 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601683 4904 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601696 4904 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601708 4904 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-node-log\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601719 4904 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 
13:43:30.601732 4904 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-slash\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601743 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-658bx\" (UniqueName: \"kubernetes.io/projected/349c3b8f-5311-4171-ade5-ce7db3d118ad-kube-api-access-658bx\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601754 4904 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601763 4904 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601774 4904 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601786 4904 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-log-socket\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601800 4904 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/349c3b8f-5311-4171-ade5-ce7db3d118ad-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601814 4904 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601827 4904 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601837 4904 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601847 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/349c3b8f-5311-4171-ade5-ce7db3d118ad-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601858 4904 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.601868 4904 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/349c3b8f-5311-4171-ade5-ce7db3d118ad-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.608171 4904 
scope.go:117] "RemoveContainer" containerID="5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.628647 4904 scope.go:117] "RemoveContainer" containerID="213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.638506 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jgl4z" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.644440 4904 scope.go:117] "RemoveContainer" containerID="fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.644873 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.645021 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": container with ID starting with fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66 not found: ID does not exist" containerID="fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.645064 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66"} err="failed to get container status \"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": rpc error: code = NotFound desc = could not find container \"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": container with ID starting with fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.645101 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.646932 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": container with ID starting with 8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415 not found: ID does not exist" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.646987 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415"} err="failed to get container status \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": rpc error: code = NotFound desc = could not find container \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": container with ID starting with 8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.647030 4904 scope.go:117] "RemoveContainer" containerID="899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.647441 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": container with ID starting with 899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6 not found: ID does not exist" containerID="899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.647488 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6"} err="failed to get container status \"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": rpc error: code = NotFound desc = could not find container \"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": container with ID starting with 899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.647518 4904 scope.go:117] "RemoveContainer" containerID="650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.647836 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": container with ID starting with 650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9 not found: ID does not exist" containerID="650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.647858 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9"} err="failed to get container status \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": rpc error: code = NotFound desc = could not find container \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": container with ID starting with 650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.647874 4904 scope.go:117] "RemoveContainer" containerID="d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.648303 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": container with ID starting with d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa not found: ID does not exist" containerID="d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.648334 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa"} err="failed to get container status \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": rpc error: code = NotFound desc = could not find container \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": container with ID starting with d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.648354 4904 scope.go:117] "RemoveContainer" containerID="3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f" Nov 21 13:43:30 crc 
kubenswrapper[4904]: E1121 13:43:30.648679 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": container with ID starting with 3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f not found: ID does not exist" containerID="3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.648713 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f"} err="failed to get container status \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": rpc error: code = NotFound desc = could not find container \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": container with ID starting with 3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.648732 4904 scope.go:117] "RemoveContainer" containerID="df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.648976 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": container with ID starting with df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3 not found: ID does not exist" containerID="df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.649003 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3"} err="failed to get container status \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": rpc error: code = NotFound desc = could not find container \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": container with ID starting with df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.649029 4904 scope.go:117] "RemoveContainer" containerID="d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.649230 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": container with ID starting with d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e not found: ID does not exist" containerID="d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.649252 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e"} err="failed to get container status \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": rpc error: code = NotFound desc = could not find container \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": container with ID starting with d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: 
I1121 13:43:30.649317 4904 scope.go:117] "RemoveContainer" containerID="5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.649509 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": container with ID starting with 5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403 not found: ID does not exist" containerID="5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.649533 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403"} err="failed to get container status \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": rpc error: code = NotFound desc = could not find container \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": container with ID starting with 5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.649547 4904 scope.go:117] "RemoveContainer" containerID="213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.649737 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": container with ID starting with 213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0 not found: ID does not exist" containerID="213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.649757 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0"} err="failed to get container status \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": rpc error: code = NotFound desc = could not find container \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": container with ID starting with 213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.649770 4904 scope.go:117] "RemoveContainer" containerID="fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.653982 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66"} err="failed to get container status \"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": rpc error: code = NotFound desc = could not find container \"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": container with ID starting with fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.654018 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.654344 4904 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415"} err="failed to get container status \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": rpc error: code = NotFound desc = could not find container \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": container with ID starting with 8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.654365 4904 scope.go:117] "RemoveContainer" containerID="899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.654580 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6"} err="failed to get container status \"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": rpc error: code = NotFound desc = could not find container \"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": container with ID starting with 899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.654604 4904 scope.go:117] "RemoveContainer" containerID="650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.654836 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9"} err="failed to get container status \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": rpc error: code = NotFound desc = could not find container \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": container with ID starting with 650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.654861 4904 scope.go:117] "RemoveContainer" containerID="d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.655244 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa"} err="failed to get container status \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": rpc error: code = NotFound desc = could not find container \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": container with ID starting with d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.655266 4904 scope.go:117] "RemoveContainer" containerID="3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.655518 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f"} err="failed to get container status \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": rpc error: code = NotFound desc = could not find container \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": container with ID starting with 3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f not found: ID does not exist" Nov 
21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.655538 4904 scope.go:117] "RemoveContainer" containerID="df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.655920 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3"} err="failed to get container status \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": rpc error: code = NotFound desc = could not find container \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": container with ID starting with df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.655967 4904 scope.go:117] "RemoveContainer" containerID="d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.656208 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e"} err="failed to get container status \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": rpc error: code = NotFound desc = could not find container \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": container with ID starting with d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.656223 4904 scope.go:117] "RemoveContainer" containerID="5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.656483 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403"} err="failed to get container status \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": rpc error: code = NotFound desc = could not find container \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": container with ID starting with 5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.656497 4904 scope.go:117] "RemoveContainer" containerID="213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.656728 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0"} err="failed to get container status \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": rpc error: code = NotFound desc = could not find container \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": container with ID starting with 213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.656742 4904 scope.go:117] "RemoveContainer" containerID="fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.656998 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66"} err="failed to get container status 
\"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": rpc error: code = NotFound desc = could not find container \"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": container with ID starting with fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.657015 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.657289 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415"} err="failed to get container status \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": rpc error: code = NotFound desc = could not find container \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": container with ID starting with 8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.657311 4904 scope.go:117] "RemoveContainer" containerID="899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.657538 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6"} err="failed to get container status \"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": rpc error: code = NotFound desc = could not find container \"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": container with ID starting with 899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.657559 4904 scope.go:117] "RemoveContainer" containerID="650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.657804 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9"} err="failed to get container status \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": rpc error: code = NotFound desc = could not find container \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": container with ID starting with 650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.657823 4904 scope.go:117] "RemoveContainer" containerID="d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658044 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa"} err="failed to get container status \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": rpc error: code = NotFound desc = could not find container \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": container with ID starting with d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658065 4904 scope.go:117] "RemoveContainer" 
containerID="3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658264 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f"} err="failed to get container status \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": rpc error: code = NotFound desc = could not find container \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": container with ID starting with 3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658279 4904 scope.go:117] "RemoveContainer" containerID="df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658475 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3"} err="failed to get container status \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": rpc error: code = NotFound desc = could not find container \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": container with ID starting with df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658487 4904 scope.go:117] "RemoveContainer" containerID="d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658700 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e"} err="failed to get container status \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": rpc error: code = NotFound desc = could not find container \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": container with ID starting with d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658713 4904 scope.go:117] "RemoveContainer" containerID="5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658905 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403"} err="failed to get container status \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": rpc error: code = NotFound desc = could not find container \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": container with ID starting with 5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.658918 4904 scope.go:117] "RemoveContainer" containerID="213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.659971 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0"} err="failed to get container status \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": rpc error: code = NotFound desc = could not find 
container \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": container with ID starting with 213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.659990 4904 scope.go:117] "RemoveContainer" containerID="fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.660979 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66"} err="failed to get container status \"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": rpc error: code = NotFound desc = could not find container \"fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66\": container with ID starting with fe49eed2a9593c345a54235ac3660207468706cdf533c7056452024e6e2aef66 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.661001 4904 scope.go:117] "RemoveContainer" containerID="8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.664120 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415"} err="failed to get container status \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": rpc error: code = NotFound desc = could not find container \"8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415\": container with ID starting with 8ad5ee1dd08c2800c969a82ca13cd7c76638eab86514be8c4d81dafec558d415 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.664142 4904 scope.go:117] "RemoveContainer" containerID="899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.664394 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6"} err="failed to get container status \"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": rpc error: code = NotFound desc = could not find container \"899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6\": container with ID starting with 899a3b4625856c4a931653d9b874ac7764f11b38d85869b948432ca5725bcee6 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.664408 4904 scope.go:117] "RemoveContainer" containerID="650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.664639 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9"} err="failed to get container status \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": rpc error: code = NotFound desc = could not find container \"650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9\": container with ID starting with 650259d36db2397011e0eb87a59d763092aa872533b42621519493d89dbfb5b9 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.664678 4904 scope.go:117] "RemoveContainer" containerID="d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.665887 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa"} err="failed to get container status \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": rpc error: code = NotFound desc = could not find container \"d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa\": container with ID starting with d1f68e582986d36f554a954ee135170a0aa788371d0dbee026cb3f0173a89cfa not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.665915 4904 scope.go:117] "RemoveContainer" containerID="3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.666122 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f"} err="failed to get container status \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": rpc error: code = NotFound desc = could not find container \"3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f\": container with ID starting with 3c5f4863435c24a7449c09a7c371ab3aebb92eb20e72392ecf847d20fe52570f not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.666142 4904 scope.go:117] "RemoveContainer" containerID="df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.666406 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3"} err="failed to get container status \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": rpc error: code = NotFound desc = could not find container \"df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3\": container with ID starting with df36b6ae705285f58cbf43fd69cb0a206dd6694b75e68a504604bf1f3e4f1ba3 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.666439 4904 scope.go:117] "RemoveContainer" containerID="d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.666837 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e"} err="failed to get container status \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": rpc error: code = NotFound desc = could not find container \"d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e\": container with ID starting with d809e6d70d5cf50f5d4601c72a6e6561659c5fdb894232b6c86a85e4520ccd1e not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.666868 4904 scope.go:117] "RemoveContainer" containerID="5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.667150 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403"} err="failed to get container status \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": rpc error: code = NotFound desc = could not find container \"5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403\": container with ID starting with 
5c440618586a5444ab0bca3ad4439c246ced0d1fddae9bc79810769f92bd3403 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.667182 4904 scope.go:117] "RemoveContainer" containerID="213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.667432 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0"} err="failed to get container status \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": rpc error: code = NotFound desc = could not find container \"213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0\": container with ID starting with 213b6409924d9d4046ffa26fcba89bd0c64c834c624bdb486a3a663abfea28c0 not found: ID does not exist" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.667453 4904 scope.go:117] "RemoveContainer" containerID="d94466c7d9d870ec2561362ad65a68514c5d51cb54d56bc0ca05aaf6dabbd5a2" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.671737 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(b49985648363dd116b6a75e0f22cec603ab27ef549ee6cd1563c64938efcb5f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.671786 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(b49985648363dd116b6a75e0f22cec603ab27ef549ee6cd1563c64938efcb5f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.671808 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(b49985648363dd116b6a75e0f22cec603ab27ef549ee6cd1563c64938efcb5f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:30 crc kubenswrapper[4904]: E1121 13:43:30.671913 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-pqc2t_openshift-operators(2d39b52b-adba-414a-ba10-66181feecef9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-pqc2t_openshift-operators(2d39b52b-adba-414a-ba10-66181feecef9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(b49985648363dd116b6a75e0f22cec603ab27ef549ee6cd1563c64938efcb5f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" podUID="2d39b52b-adba-414a-ba10-66181feecef9" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703288 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-slash\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703348 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-run-ovn-kubernetes\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703393 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-kubelet\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703425 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovnkube-script-lib\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703445 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-node-log\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703447 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-slash\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703460 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-log-socket\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-log-socket\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703552 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-run-ovn-kubernetes\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703586 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-etc-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703623 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-systemd\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703743 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-ovn\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703799 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-systemd-units\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703827 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovn-node-metrics-cert\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703860 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jcp\" (UniqueName: \"kubernetes.io/projected/c3e10385-1e9b-4b11-8370-c39b2406e50b-kube-api-access-h5jcp\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.703997 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-var-lib-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704030 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovnkube-config\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704048 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc 
kubenswrapper[4904]: I1121 13:43:30.704075 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704092 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-run-netns\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704111 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-cni-bin\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704129 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-cni-netd\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704152 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-env-overrides\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704533 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-kubelet\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704839 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-env-overrides\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704893 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-ovn\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704899 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-etc-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704986 4904 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-var-lib-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705256 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovnkube-script-lib\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705261 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-systemd\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705329 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-run-netns\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705372 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-run-openvswitch\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705418 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovnkube-config\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705460 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-cni-bin\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705490 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-cni-netd\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.704844 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-node-log\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705535 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.705542 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c3e10385-1e9b-4b11-8370-c39b2406e50b-systemd-units\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.709265 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c3e10385-1e9b-4b11-8370-c39b2406e50b-ovn-node-metrics-cert\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.744000 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jcp\" (UniqueName: \"kubernetes.io/projected/c3e10385-1e9b-4b11-8370-c39b2406e50b-kube-api-access-h5jcp\") pod \"ovnkube-node-75c2m\" (UID: \"c3e10385-1e9b-4b11-8370-c39b2406e50b\") " pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.774962 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.780772 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-txkm2"] Nov 21 13:43:30 crc kubenswrapper[4904]: I1121 13:43:30.785066 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-txkm2"] Nov 21 13:43:31 crc kubenswrapper[4904]: I1121 13:43:31.407884 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/2.log" Nov 21 13:43:31 crc kubenswrapper[4904]: I1121 13:43:31.410018 4904 generic.go:334] "Generic (PLEG): container finished" podID="c3e10385-1e9b-4b11-8370-c39b2406e50b" containerID="e65e912ca643fc9d9e1ac26769d057b308174aab0d5fc8856efdf779817262de" exitCode=0 Nov 21 13:43:31 crc kubenswrapper[4904]: I1121 13:43:31.410105 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerDied","Data":"e65e912ca643fc9d9e1ac26769d057b308174aab0d5fc8856efdf779817262de"} Nov 21 13:43:31 crc kubenswrapper[4904]: I1121 13:43:31.410242 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"108f98ae0f7be1ce81e56a88e0db60cb4516b57ce2482263be4ce0fc5ca36486"} Nov 21 13:43:32 crc kubenswrapper[4904]: I1121 13:43:32.426528 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"57c8eeff47b6efc4812b5bd88574f2504bf27f7c9aae4e0d78c8568ac286e845"} Nov 21 13:43:32 crc kubenswrapper[4904]: I1121 13:43:32.427071 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" 
event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"fafda1a52fd461581f8c249d3872a7cab45e123a9f77c7529a7fd0d15a31879e"} Nov 21 13:43:32 crc kubenswrapper[4904]: I1121 13:43:32.427088 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"51f3d2566c2c238d963e10d08cdb83a7c7ba0df536a68be12372894f5da22943"} Nov 21 13:43:32 crc kubenswrapper[4904]: I1121 13:43:32.427101 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"ec9c51de90de18bd4d58887e8cba2cfeeabda09c8e90c96605989ea27780f47f"} Nov 21 13:43:32 crc kubenswrapper[4904]: I1121 13:43:32.427111 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"e9115f13f5f9e3eed2f74d5a1255087da01421a7605d048b12e5913e447fe685"} Nov 21 13:43:32 crc kubenswrapper[4904]: I1121 13:43:32.523423 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349c3b8f-5311-4171-ade5-ce7db3d118ad" path="/var/lib/kubelet/pods/349c3b8f-5311-4171-ade5-ce7db3d118ad/volumes" Nov 21 13:43:33 crc kubenswrapper[4904]: I1121 13:43:33.434986 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"dfe46e49e6eae4f4887b1887be7a4a506af113477dc2f35be4e3f908dea81abb"} Nov 21 13:43:35 crc kubenswrapper[4904]: I1121 13:43:35.447111 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"a6ff01f4c717b296d637205b0b0f5d622b6f4766f0973913e071a9a4ebbcf077"} Nov 21 13:43:37 crc kubenswrapper[4904]: I1121 13:43:37.460448 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" event={"ID":"c3e10385-1e9b-4b11-8370-c39b2406e50b","Type":"ContainerStarted","Data":"5d3f49cb378345ad2e721a37aa798fe339b6a8d1c5322cae3b97cd66f5ee3047"} Nov 21 13:43:37 crc kubenswrapper[4904]: I1121 13:43:37.461334 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:37 crc kubenswrapper[4904]: I1121 13:43:37.461368 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:37 crc kubenswrapper[4904]: I1121 13:43:37.461441 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:37 crc kubenswrapper[4904]: I1121 13:43:37.501243 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:37 crc kubenswrapper[4904]: I1121 13:43:37.501622 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:43:37 crc kubenswrapper[4904]: I1121 13:43:37.511724 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" podStartSLOduration=7.511707628 podStartE2EDuration="7.511707628s" podCreationTimestamp="2025-11-21 13:43:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:43:37.507516194 +0000 UTC m=+691.629048776" watchObservedRunningTime="2025-11-21 13:43:37.511707628 +0000 UTC m=+691.633240180" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.126075 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-pqc2t"] Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.126967 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.127563 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.139629 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7"] Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.139864 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.140353 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.145903 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn"] Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.146113 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.146757 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.171721 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p"] Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.171853 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.172338 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.192810 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-l4jvh"] Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.192947 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:38 crc kubenswrapper[4904]: I1121 13:43:38.193374 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.203754 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(f120a97021e4fd5e8822ae7b7644eb5ef34aa604e97ef3c68ae88a2126691a57): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.203818 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(f120a97021e4fd5e8822ae7b7644eb5ef34aa604e97ef3c68ae88a2126691a57): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.203843 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(f120a97021e4fd5e8822ae7b7644eb5ef34aa604e97ef3c68ae88a2126691a57): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.203890 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-pqc2t_openshift-operators(2d39b52b-adba-414a-ba10-66181feecef9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-pqc2t_openshift-operators(2d39b52b-adba-414a-ba10-66181feecef9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(f120a97021e4fd5e8822ae7b7644eb5ef34aa604e97ef3c68ae88a2126691a57): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" podUID="2d39b52b-adba-414a-ba10-66181feecef9" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.212095 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(e13f8dfc047a2ffd2437189740725761fff0ed949bf9035acc8e25db27e8f986): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.212150 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(e13f8dfc047a2ffd2437189740725761fff0ed949bf9035acc8e25db27e8f986): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.212181 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(e13f8dfc047a2ffd2437189740725761fff0ed949bf9035acc8e25db27e8f986): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.212229 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators(c4f53eb1-20cb-4196-89e2-197cecdacc6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators(c4f53eb1-20cb-4196-89e2-197cecdacc6c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(e13f8dfc047a2ffd2437189740725761fff0ed949bf9035acc8e25db27e8f986): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" podUID="c4f53eb1-20cb-4196-89e2-197cecdacc6c" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.233873 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(0ffba2b5f5faf813f5b0d421a10b143464a85b565ff7c8fdf988991acb10ede0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.233965 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(0ffba2b5f5faf813f5b0d421a10b143464a85b565ff7c8fdf988991acb10ede0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.233994 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(0ffba2b5f5faf813f5b0d421a10b143464a85b565ff7c8fdf988991acb10ede0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.234061 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators(3d488c9a-40e8-4cce-a260-c4610af92de8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators(3d488c9a-40e8-4cce-a260-c4610af92de8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(0ffba2b5f5faf813f5b0d421a10b143464a85b565ff7c8fdf988991acb10ede0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" podUID="3d488c9a-40e8-4cce-a260-c4610af92de8" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.241226 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(a6ccfcf6877d66845cb8636c108e79bd07c422d069b668e377364856e43cb218): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.241322 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(a6ccfcf6877d66845cb8636c108e79bd07c422d069b668e377364856e43cb218): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.241349 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(a6ccfcf6877d66845cb8636c108e79bd07c422d069b668e377364856e43cb218): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.241408 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators(7f97ec95-046c-4a0e-9ebf-3baf5fed1053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators(7f97ec95-046c-4a0e-9ebf-3baf5fed1053)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(a6ccfcf6877d66845cb8636c108e79bd07c422d069b668e377364856e43cb218): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" podUID="7f97ec95-046c-4a0e-9ebf-3baf5fed1053" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.249793 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(81c6c82c4b9b732f8f90be35ca7c24e1ef142c5d8dfa6b8b4b99b3b9c3879cd4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.249879 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(81c6c82c4b9b732f8f90be35ca7c24e1ef142c5d8dfa6b8b4b99b3b9c3879cd4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.249925 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(81c6c82c4b9b732f8f90be35ca7c24e1ef142c5d8dfa6b8b4b99b3b9c3879cd4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:38 crc kubenswrapper[4904]: E1121 13:43:38.249978 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-l4jvh_openshift-operators(ebdd832d-d268-472a-b067-d61a6c520b7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-l4jvh_openshift-operators(ebdd832d-d268-472a-b067-d61a6c520b7f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(81c6c82c4b9b732f8f90be35ca7c24e1ef142c5d8dfa6b8b4b99b3b9c3879cd4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" podUID="ebdd832d-d268-472a-b067-d61a6c520b7f" Nov 21 13:43:43 crc kubenswrapper[4904]: I1121 13:43:43.512974 4904 scope.go:117] "RemoveContainer" containerID="e7bcb85c4309dddf373567192fb1362a6c42fb2260c7c8101594a97cf2d7016d" Nov 21 13:43:43 crc kubenswrapper[4904]: E1121 13:43:43.513916 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-kgngm_openshift-multus(190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a)\"" pod="openshift-multus/multus-kgngm" podUID="190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a" Nov 21 13:43:51 crc kubenswrapper[4904]: I1121 13:43:51.513199 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:51 crc kubenswrapper[4904]: I1121 13:43:51.513591 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:51 crc kubenswrapper[4904]: I1121 13:43:51.514169 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:51 crc kubenswrapper[4904]: I1121 13:43:51.514418 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:51 crc kubenswrapper[4904]: E1121 13:43:51.566266 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(7e2bb82a32003226bb307ef25f195f881625b8623f85b06cad57eb45ad596c4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:51 crc kubenswrapper[4904]: E1121 13:43:51.566362 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(7e2bb82a32003226bb307ef25f195f881625b8623f85b06cad57eb45ad596c4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:51 crc kubenswrapper[4904]: E1121 13:43:51.566388 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(7e2bb82a32003226bb307ef25f195f881625b8623f85b06cad57eb45ad596c4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:43:51 crc kubenswrapper[4904]: E1121 13:43:51.566486 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators(3d488c9a-40e8-4cce-a260-c4610af92de8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators(3d488c9a-40e8-4cce-a260-c4610af92de8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_openshift-operators_3d488c9a-40e8-4cce-a260-c4610af92de8_0(7e2bb82a32003226bb307ef25f195f881625b8623f85b06cad57eb45ad596c4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" podUID="3d488c9a-40e8-4cce-a260-c4610af92de8" Nov 21 13:43:51 crc kubenswrapper[4904]: E1121 13:43:51.572164 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(086180be96108dfddad88df67298740655367842ec562c1bb3ca7faf49d7add9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:51 crc kubenswrapper[4904]: E1121 13:43:51.572235 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(086180be96108dfddad88df67298740655367842ec562c1bb3ca7faf49d7add9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:51 crc kubenswrapper[4904]: E1121 13:43:51.572262 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(086180be96108dfddad88df67298740655367842ec562c1bb3ca7faf49d7add9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:43:51 crc kubenswrapper[4904]: E1121 13:43:51.572327 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators(7f97ec95-046c-4a0e-9ebf-3baf5fed1053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators(7f97ec95-046c-4a0e-9ebf-3baf5fed1053)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_openshift-operators_7f97ec95-046c-4a0e-9ebf-3baf5fed1053_0(086180be96108dfddad88df67298740655367842ec562c1bb3ca7faf49d7add9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" podUID="7f97ec95-046c-4a0e-9ebf-3baf5fed1053" Nov 21 13:43:52 crc kubenswrapper[4904]: I1121 13:43:52.512824 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:52 crc kubenswrapper[4904]: I1121 13:43:52.512943 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:52 crc kubenswrapper[4904]: I1121 13:43:52.513495 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:52 crc kubenswrapper[4904]: I1121 13:43:52.513529 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:52 crc kubenswrapper[4904]: E1121 13:43:52.547758 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(63f29722d9807a448be1971ab3c80d3d461706832c32909eceab07040d819ae3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:52 crc kubenswrapper[4904]: E1121 13:43:52.547821 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(63f29722d9807a448be1971ab3c80d3d461706832c32909eceab07040d819ae3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:52 crc kubenswrapper[4904]: E1121 13:43:52.547846 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(63f29722d9807a448be1971ab3c80d3d461706832c32909eceab07040d819ae3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:43:52 crc kubenswrapper[4904]: E1121 13:43:52.547904 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-l4jvh_openshift-operators(ebdd832d-d268-472a-b067-d61a6c520b7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-l4jvh_openshift-operators(ebdd832d-d268-472a-b067-d61a6c520b7f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-l4jvh_openshift-operators_ebdd832d-d268-472a-b067-d61a6c520b7f_0(63f29722d9807a448be1971ab3c80d3d461706832c32909eceab07040d819ae3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" podUID="ebdd832d-d268-472a-b067-d61a6c520b7f" Nov 21 13:43:52 crc kubenswrapper[4904]: E1121 13:43:52.553347 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(fbbec6203d2115c1f48d9cefc0921709a79da9304eb350d358328567c72f5fde): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:52 crc kubenswrapper[4904]: E1121 13:43:52.553433 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(fbbec6203d2115c1f48d9cefc0921709a79da9304eb350d358328567c72f5fde): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:52 crc kubenswrapper[4904]: E1121 13:43:52.553528 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(fbbec6203d2115c1f48d9cefc0921709a79da9304eb350d358328567c72f5fde): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:43:52 crc kubenswrapper[4904]: E1121 13:43:52.553580 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-pqc2t_openshift-operators(2d39b52b-adba-414a-ba10-66181feecef9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-pqc2t_openshift-operators(2d39b52b-adba-414a-ba10-66181feecef9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-pqc2t_openshift-operators_2d39b52b-adba-414a-ba10-66181feecef9_0(fbbec6203d2115c1f48d9cefc0921709a79da9304eb350d358328567c72f5fde): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" podUID="2d39b52b-adba-414a-ba10-66181feecef9" Nov 21 13:43:53 crc kubenswrapper[4904]: I1121 13:43:53.512120 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:53 crc kubenswrapper[4904]: I1121 13:43:53.513428 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:53 crc kubenswrapper[4904]: E1121 13:43:53.536555 4904 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(46b678e225b4a14dc9a64ebf0f655a5e10dcf5b619e460f5a5c4b2c9af2c4ed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 21 13:43:53 crc kubenswrapper[4904]: E1121 13:43:53.536642 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(46b678e225b4a14dc9a64ebf0f655a5e10dcf5b619e460f5a5c4b2c9af2c4ed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:53 crc kubenswrapper[4904]: E1121 13:43:53.536693 4904 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(46b678e225b4a14dc9a64ebf0f655a5e10dcf5b619e460f5a5c4b2c9af2c4ed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:43:53 crc kubenswrapper[4904]: E1121 13:43:53.536749 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators(c4f53eb1-20cb-4196-89e2-197cecdacc6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators(c4f53eb1-20cb-4196-89e2-197cecdacc6c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-gg6qn_openshift-operators_c4f53eb1-20cb-4196-89e2-197cecdacc6c_0(46b678e225b4a14dc9a64ebf0f655a5e10dcf5b619e460f5a5c4b2c9af2c4ed1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" podUID="c4f53eb1-20cb-4196-89e2-197cecdacc6c" Nov 21 13:43:56 crc kubenswrapper[4904]: I1121 13:43:56.515463 4904 scope.go:117] "RemoveContainer" containerID="e7bcb85c4309dddf373567192fb1362a6c42fb2260c7c8101594a97cf2d7016d" Nov 21 13:43:57 crc kubenswrapper[4904]: I1121 13:43:57.582925 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kgngm_190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a/kube-multus/2.log" Nov 21 13:43:57 crc kubenswrapper[4904]: I1121 13:43:57.583021 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kgngm" event={"ID":"190a4a47-76b8-4bbc-95f3-0f9c9c12fb1a","Type":"ContainerStarted","Data":"952b03795f2beea663468020d3eef67596d0529d775dbaf94297b2187ff26863"} Nov 21 13:43:58 crc kubenswrapper[4904]: I1121 13:43:58.113292 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:43:58 crc kubenswrapper[4904]: I1121 13:43:58.113670 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:44:00 crc kubenswrapper[4904]: I1121 13:44:00.796831 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-75c2m" Nov 21 13:44:02 crc kubenswrapper[4904]: I1121 13:44:02.512810 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:44:02 crc kubenswrapper[4904]: I1121 13:44:02.513421 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" Nov 21 13:44:02 crc kubenswrapper[4904]: I1121 13:44:02.710531 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p"] Nov 21 13:44:02 crc kubenswrapper[4904]: W1121 13:44:02.718937 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d488c9a_40e8_4cce_a260_c4610af92de8.slice/crio-4091f9f07addeecc15a62ad748f7586cff3fadfafb76123afcf33ff19c953308 WatchSource:0}: Error finding container 4091f9f07addeecc15a62ad748f7586cff3fadfafb76123afcf33ff19c953308: Status 404 returned error can't find the container with id 4091f9f07addeecc15a62ad748f7586cff3fadfafb76123afcf33ff19c953308 Nov 21 13:44:03 crc kubenswrapper[4904]: I1121 13:44:03.619758 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" event={"ID":"3d488c9a-40e8-4cce-a260-c4610af92de8","Type":"ContainerStarted","Data":"4091f9f07addeecc15a62ad748f7586cff3fadfafb76123afcf33ff19c953308"} Nov 21 13:44:04 crc kubenswrapper[4904]: I1121 13:44:04.512071 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:44:04 crc kubenswrapper[4904]: I1121 13:44:04.512185 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:44:04 crc kubenswrapper[4904]: I1121 13:44:04.512945 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" Nov 21 13:44:04 crc kubenswrapper[4904]: I1121 13:44:04.513015 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" Nov 21 13:44:04 crc kubenswrapper[4904]: I1121 13:44:04.752937 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn"] Nov 21 13:44:04 crc kubenswrapper[4904]: W1121 13:44:04.764991 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f53eb1_20cb_4196_89e2_197cecdacc6c.slice/crio-46a4f171bf71cc5075939bb00df108294eae3d7bbad812bbe300dac8179fc893 WatchSource:0}: Error finding container 46a4f171bf71cc5075939bb00df108294eae3d7bbad812bbe300dac8179fc893: Status 404 returned error can't find the container with id 46a4f171bf71cc5075939bb00df108294eae3d7bbad812bbe300dac8179fc893 Nov 21 13:44:04 crc kubenswrapper[4904]: I1121 13:44:04.794050 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7"] Nov 21 13:44:04 crc kubenswrapper[4904]: W1121 13:44:04.804252 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f97ec95_046c_4a0e_9ebf_3baf5fed1053.slice/crio-72b3c683d204bc45e323c6ef9755fb3be0032229ae80e3ba3cb79e4c73ffed92 WatchSource:0}: Error finding container 72b3c683d204bc45e323c6ef9755fb3be0032229ae80e3ba3cb79e4c73ffed92: Status 404 returned error can't find the container with id 72b3c683d204bc45e323c6ef9755fb3be0032229ae80e3ba3cb79e4c73ffed92 Nov 21 13:44:05 crc kubenswrapper[4904]: I1121 13:44:05.512133 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:44:05 crc kubenswrapper[4904]: I1121 13:44:05.512920 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:44:05 crc kubenswrapper[4904]: I1121 13:44:05.633647 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" event={"ID":"7f97ec95-046c-4a0e-9ebf-3baf5fed1053","Type":"ContainerStarted","Data":"72b3c683d204bc45e323c6ef9755fb3be0032229ae80e3ba3cb79e4c73ffed92"} Nov 21 13:44:05 crc kubenswrapper[4904]: I1121 13:44:05.634802 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" event={"ID":"c4f53eb1-20cb-4196-89e2-197cecdacc6c","Type":"ContainerStarted","Data":"46a4f171bf71cc5075939bb00df108294eae3d7bbad812bbe300dac8179fc893"} Nov 21 13:44:05 crc kubenswrapper[4904]: I1121 13:44:05.932524 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-l4jvh"] Nov 21 13:44:06 crc kubenswrapper[4904]: I1121 13:44:06.518968 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:44:06 crc kubenswrapper[4904]: I1121 13:44:06.520073 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:44:06 crc kubenswrapper[4904]: I1121 13:44:06.643031 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" event={"ID":"ebdd832d-d268-472a-b067-d61a6c520b7f","Type":"ContainerStarted","Data":"f0af49dbca4ee8ee83f92bd44da28ba8502433a92051fc822aea1a4e2a80a50b"} Nov 21 13:44:11 crc kubenswrapper[4904]: I1121 13:44:11.007075 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-pqc2t"] Nov 21 13:44:11 crc kubenswrapper[4904]: I1121 13:44:11.698340 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" event={"ID":"c4f53eb1-20cb-4196-89e2-197cecdacc6c","Type":"ContainerStarted","Data":"8977282ebc2d3a492a1877f7265547ad443d99773f695cc7289ce0e79ce84bcc"} Nov 21 13:44:11 crc kubenswrapper[4904]: I1121 13:44:11.700103 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" event={"ID":"2d39b52b-adba-414a-ba10-66181feecef9","Type":"ContainerStarted","Data":"ad4c11108098cc0a78e32c1087f1016fd613190c5f58e26e0b1dfd819309e391"} Nov 21 13:44:11 crc kubenswrapper[4904]: I1121 13:44:11.703010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" event={"ID":"3d488c9a-40e8-4cce-a260-c4610af92de8","Type":"ContainerStarted","Data":"bd5cc2c4e633d199d0dc15cc1226fca46a1a47dc70a8a929eb0978cc6cef4b21"} Nov 21 13:44:11 crc kubenswrapper[4904]: I1121 13:44:11.705589 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" event={"ID":"7f97ec95-046c-4a0e-9ebf-3baf5fed1053","Type":"ContainerStarted","Data":"6424b7ce5d11e5f9c42817c88d78b961f5aa6f55fe0cef2d5eff9b5cc4969b1d"} Nov 21 13:44:11 crc kubenswrapper[4904]: I1121 13:44:11.727565 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-gg6qn" podStartSLOduration=36.910374023 podStartE2EDuration="42.727532169s" podCreationTimestamp="2025-11-21 13:43:29 +0000 UTC" firstStartedPulling="2025-11-21 13:44:04.774294914 +0000 UTC m=+718.895827466" lastFinishedPulling="2025-11-21 13:44:10.59145306 +0000 UTC m=+724.712985612" observedRunningTime="2025-11-21 13:44:11.719993943 +0000 UTC m=+725.841526525" watchObservedRunningTime="2025-11-21 13:44:11.727532169 +0000 UTC m=+725.849064731" Nov 21 13:44:11 crc kubenswrapper[4904]: I1121 13:44:11.763039 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7" podStartSLOduration=36.925871296 podStartE2EDuration="42.763018436s" podCreationTimestamp="2025-11-21 13:43:29 +0000 UTC" firstStartedPulling="2025-11-21 13:44:04.806788586 +0000 UTC m=+718.928321138" lastFinishedPulling="2025-11-21 13:44:10.643935726 +0000 UTC m=+724.765468278" observedRunningTime="2025-11-21 13:44:11.756042804 +0000 UTC m=+725.877575356" watchObservedRunningTime="2025-11-21 13:44:11.763018436 +0000 UTC m=+725.884550988" Nov 21 13:44:11 crc kubenswrapper[4904]: I1121 13:44:11.782042 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p" podStartSLOduration=34.921417141 
podStartE2EDuration="42.782022395s" podCreationTimestamp="2025-11-21 13:43:29 +0000 UTC" firstStartedPulling="2025-11-21 13:44:02.720928621 +0000 UTC m=+716.842461173" lastFinishedPulling="2025-11-21 13:44:10.581533875 +0000 UTC m=+724.703066427" observedRunningTime="2025-11-21 13:44:11.781087883 +0000 UTC m=+725.902620435" watchObservedRunningTime="2025-11-21 13:44:11.782022395 +0000 UTC m=+725.903554947" Nov 21 13:44:14 crc kubenswrapper[4904]: I1121 13:44:14.724407 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" event={"ID":"ebdd832d-d268-472a-b067-d61a6c520b7f","Type":"ContainerStarted","Data":"0ceb2f0b99c1902788f8f09076ee1ac6c793b25463d2e931a427ac6d11bdf2e7"} Nov 21 13:44:14 crc kubenswrapper[4904]: I1121 13:44:14.724975 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:44:14 crc kubenswrapper[4904]: I1121 13:44:14.747301 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" podStartSLOduration=37.773401997 podStartE2EDuration="45.747283909s" podCreationTimestamp="2025-11-21 13:43:29 +0000 UTC" firstStartedPulling="2025-11-21 13:44:05.953680214 +0000 UTC m=+720.075212766" lastFinishedPulling="2025-11-21 13:44:13.927562126 +0000 UTC m=+728.049094678" observedRunningTime="2025-11-21 13:44:14.743529426 +0000 UTC m=+728.865061978" watchObservedRunningTime="2025-11-21 13:44:14.747283909 +0000 UTC m=+728.868816461" Nov 21 13:44:14 crc kubenswrapper[4904]: I1121 13:44:14.757082 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-l4jvh" Nov 21 13:44:15 crc kubenswrapper[4904]: I1121 13:44:15.735696 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" event={"ID":"2d39b52b-adba-414a-ba10-66181feecef9","Type":"ContainerStarted","Data":"e35ef3d78643dcecc7ace249e9787188e44698b89e23fe0a99154bf7a6aaeba2"} Nov 21 13:44:15 crc kubenswrapper[4904]: I1121 13:44:15.735853 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:44:15 crc kubenswrapper[4904]: I1121 13:44:15.763000 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" podStartSLOduration=42.650858726 podStartE2EDuration="46.762978105s" podCreationTimestamp="2025-11-21 13:43:29 +0000 UTC" firstStartedPulling="2025-11-21 13:44:11.032037046 +0000 UTC m=+725.153569598" lastFinishedPulling="2025-11-21 13:44:15.144156435 +0000 UTC m=+729.265688977" observedRunningTime="2025-11-21 13:44:15.758628046 +0000 UTC m=+729.880160598" watchObservedRunningTime="2025-11-21 13:44:15.762978105 +0000 UTC m=+729.884510667" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.594415 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-hz5x8"] Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.595453 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-hz5x8" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.597792 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.597792 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.598613 4904 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-l52qq" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.628158 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-w2h7s"] Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.643843 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-w2h7s" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.643740 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-hz5x8"] Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.644495 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-q7hpz"] Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.646391 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.648786 4904 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-l54nj" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.649273 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-pqc2t" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.650436 4904 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-xqzsz" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.667832 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66b9w\" (UniqueName: \"kubernetes.io/projected/c9229a7d-9559-43dd-8470-5e0377837fa3-kube-api-access-66b9w\") pod \"cert-manager-cainjector-7f985d654d-hz5x8\" (UID: \"c9229a7d-9559-43dd-8470-5e0377837fa3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-hz5x8" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.684945 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-w2h7s"] Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.686726 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-q7hpz"] Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.769429 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66b9w\" (UniqueName: \"kubernetes.io/projected/c9229a7d-9559-43dd-8470-5e0377837fa3-kube-api-access-66b9w\") pod \"cert-manager-cainjector-7f985d654d-hz5x8\" (UID: \"c9229a7d-9559-43dd-8470-5e0377837fa3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-hz5x8" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.769547 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjs2\" (UniqueName: 
\"kubernetes.io/projected/e0eaefd8-c20d-4081-baa3-df1a92c06136-kube-api-access-kwjs2\") pod \"cert-manager-5b446d88c5-w2h7s\" (UID: \"e0eaefd8-c20d-4081-baa3-df1a92c06136\") " pod="cert-manager/cert-manager-5b446d88c5-w2h7s" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.769587 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s675v\" (UniqueName: \"kubernetes.io/projected/9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80-kube-api-access-s675v\") pod \"cert-manager-webhook-5655c58dd6-q7hpz\" (UID: \"9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.791848 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66b9w\" (UniqueName: \"kubernetes.io/projected/c9229a7d-9559-43dd-8470-5e0377837fa3-kube-api-access-66b9w\") pod \"cert-manager-cainjector-7f985d654d-hz5x8\" (UID: \"c9229a7d-9559-43dd-8470-5e0377837fa3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-hz5x8" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.870787 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwjs2\" (UniqueName: \"kubernetes.io/projected/e0eaefd8-c20d-4081-baa3-df1a92c06136-kube-api-access-kwjs2\") pod \"cert-manager-5b446d88c5-w2h7s\" (UID: \"e0eaefd8-c20d-4081-baa3-df1a92c06136\") " pod="cert-manager/cert-manager-5b446d88c5-w2h7s" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.870845 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s675v\" (UniqueName: \"kubernetes.io/projected/9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80-kube-api-access-s675v\") pod \"cert-manager-webhook-5655c58dd6-q7hpz\" (UID: \"9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.890821 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s675v\" (UniqueName: \"kubernetes.io/projected/9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80-kube-api-access-s675v\") pod \"cert-manager-webhook-5655c58dd6-q7hpz\" (UID: \"9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.891091 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwjs2\" (UniqueName: \"kubernetes.io/projected/e0eaefd8-c20d-4081-baa3-df1a92c06136-kube-api-access-kwjs2\") pod \"cert-manager-5b446d88c5-w2h7s\" (UID: \"e0eaefd8-c20d-4081-baa3-df1a92c06136\") " pod="cert-manager/cert-manager-5b446d88c5-w2h7s" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.914882 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-hz5x8" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.980513 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-w2h7s" Nov 21 13:44:20 crc kubenswrapper[4904]: I1121 13:44:20.997250 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" Nov 21 13:44:21 crc kubenswrapper[4904]: I1121 13:44:21.134697 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-hz5x8"] Nov 21 13:44:21 crc kubenswrapper[4904]: I1121 13:44:21.238318 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-w2h7s"] Nov 21 13:44:21 crc kubenswrapper[4904]: I1121 13:44:21.292056 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-q7hpz"] Nov 21 13:44:21 crc kubenswrapper[4904]: W1121 13:44:21.297834 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fa4f8f4_6159_45d3_886a_c8bfa7cd6b80.slice/crio-66f755b453a325b62439dd5405d99a9c4da261321d7f6344a1bd0ff9ebb918b4 WatchSource:0}: Error finding container 66f755b453a325b62439dd5405d99a9c4da261321d7f6344a1bd0ff9ebb918b4: Status 404 returned error can't find the container with id 66f755b453a325b62439dd5405d99a9c4da261321d7f6344a1bd0ff9ebb918b4 Nov 21 13:44:21 crc kubenswrapper[4904]: I1121 13:44:21.771642 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-hz5x8" event={"ID":"c9229a7d-9559-43dd-8470-5e0377837fa3","Type":"ContainerStarted","Data":"18834fbd401b5e4d430238fb4a7ff3716cd632a180e11119747b7c4322d7458a"} Nov 21 13:44:21 crc kubenswrapper[4904]: I1121 13:44:21.773493 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" event={"ID":"9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80","Type":"ContainerStarted","Data":"66f755b453a325b62439dd5405d99a9c4da261321d7f6344a1bd0ff9ebb918b4"} Nov 21 13:44:21 crc kubenswrapper[4904]: I1121 13:44:21.774483 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-w2h7s" event={"ID":"e0eaefd8-c20d-4081-baa3-df1a92c06136","Type":"ContainerStarted","Data":"cabcc894270fd849416dd5b4a6a8e2232e9ecf30c88cb60aea395f638eb3e105"} Nov 21 13:44:25 crc kubenswrapper[4904]: I1121 13:44:25.837483 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-hz5x8" event={"ID":"c9229a7d-9559-43dd-8470-5e0377837fa3","Type":"ContainerStarted","Data":"3ba1b4c0aa18cbf0fd7006d9d2af071c7f117689af8b24d4e758f6b251e33d3a"} Nov 21 13:44:25 crc kubenswrapper[4904]: I1121 13:44:25.860043 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-hz5x8" podStartSLOduration=2.296860108 podStartE2EDuration="5.860014334s" podCreationTimestamp="2025-11-21 13:44:20 +0000 UTC" firstStartedPulling="2025-11-21 13:44:21.152363921 +0000 UTC m=+735.273896473" lastFinishedPulling="2025-11-21 13:44:24.715518147 +0000 UTC m=+738.837050699" observedRunningTime="2025-11-21 13:44:25.855599295 +0000 UTC m=+739.977131847" watchObservedRunningTime="2025-11-21 13:44:25.860014334 +0000 UTC m=+739.981546886" Nov 21 13:44:26 crc kubenswrapper[4904]: I1121 13:44:26.845812 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" event={"ID":"9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80","Type":"ContainerStarted","Data":"b2b665e13b95c2df5e65b63b9f4bd5e5daa4466b79c84626b349c6763dbcd204"} Nov 21 13:44:26 crc kubenswrapper[4904]: I1121 13:44:26.847228 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" Nov 21 13:44:26 crc kubenswrapper[4904]: I1121 13:44:26.849816 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-w2h7s" event={"ID":"e0eaefd8-c20d-4081-baa3-df1a92c06136","Type":"ContainerStarted","Data":"8ed71fd35d61653d73913c783c141fc56e63c80807540e03c30dab8c7143c1a7"} Nov 21 13:44:26 crc kubenswrapper[4904]: I1121 13:44:26.861452 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" podStartSLOduration=1.625024229 podStartE2EDuration="6.861433736s" podCreationTimestamp="2025-11-21 13:44:20 +0000 UTC" firstStartedPulling="2025-11-21 13:44:21.299306642 +0000 UTC m=+735.420839184" lastFinishedPulling="2025-11-21 13:44:26.535716139 +0000 UTC m=+740.657248691" observedRunningTime="2025-11-21 13:44:26.8607465 +0000 UTC m=+740.982279052" watchObservedRunningTime="2025-11-21 13:44:26.861433736 +0000 UTC m=+740.982966288" Nov 21 13:44:26 crc kubenswrapper[4904]: I1121 13:44:26.881705 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-w2h7s" podStartSLOduration=1.587882341 podStartE2EDuration="6.881685527s" podCreationTimestamp="2025-11-21 13:44:20 +0000 UTC" firstStartedPulling="2025-11-21 13:44:21.243536473 +0000 UTC m=+735.365069025" lastFinishedPulling="2025-11-21 13:44:26.537339659 +0000 UTC m=+740.658872211" observedRunningTime="2025-11-21 13:44:26.879995385 +0000 UTC m=+741.001527937" watchObservedRunningTime="2025-11-21 13:44:26.881685527 +0000 UTC m=+741.003218079" Nov 21 13:44:28 crc kubenswrapper[4904]: I1121 13:44:28.113697 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:44:28 crc kubenswrapper[4904]: I1121 13:44:28.114065 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:44:36 crc kubenswrapper[4904]: I1121 13:44:36.000861 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-q7hpz" Nov 21 13:44:41 crc kubenswrapper[4904]: I1121 13:44:41.780561 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wkqr4"] Nov 21 13:44:41 crc kubenswrapper[4904]: I1121 13:44:41.781178 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" podUID="a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" containerName="controller-manager" containerID="cri-o://ff3c84f4a4346f02575943f8a97211163143b3bcc9972b205d32f95aa98a4b92" gracePeriod=30 Nov 21 13:44:41 crc kubenswrapper[4904]: I1121 13:44:41.900401 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"] Nov 21 13:44:41 crc kubenswrapper[4904]: I1121 13:44:41.900635 4904 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" podUID="a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" containerName="route-controller-manager" containerID="cri-o://bdfaa2b070c75d4c025363e3a6cd51430d4d6c7fdf353b5d4daf387c810d99f5" gracePeriod=30 Nov 21 13:44:43 crc kubenswrapper[4904]: I1121 13:44:43.951267 4904 generic.go:334] "Generic (PLEG): container finished" podID="a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" containerID="ff3c84f4a4346f02575943f8a97211163143b3bcc9972b205d32f95aa98a4b92" exitCode=0 Nov 21 13:44:43 crc kubenswrapper[4904]: I1121 13:44:43.951344 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" event={"ID":"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5","Type":"ContainerDied","Data":"ff3c84f4a4346f02575943f8a97211163143b3bcc9972b205d32f95aa98a4b92"} Nov 21 13:44:43 crc kubenswrapper[4904]: I1121 13:44:43.953940 4904 generic.go:334] "Generic (PLEG): container finished" podID="a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" containerID="bdfaa2b070c75d4c025363e3a6cd51430d4d6c7fdf353b5d4daf387c810d99f5" exitCode=0 Nov 21 13:44:43 crc kubenswrapper[4904]: I1121 13:44:43.953992 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" event={"ID":"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90","Type":"ContainerDied","Data":"bdfaa2b070c75d4c025363e3a6cd51430d4d6c7fdf353b5d4daf387c810d99f5"} Nov 21 13:44:44 crc kubenswrapper[4904]: I1121 13:44:44.938087 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.001212 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fd96f8954-g9jwj"] Nov 21 13:44:45 crc kubenswrapper[4904]: E1121 13:44:45.001492 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" containerName="controller-manager" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.001511 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" containerName="controller-manager" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.001641 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" containerName="controller-manager" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.002173 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.005491 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" event={"ID":"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5","Type":"ContainerDied","Data":"76a1cfc6268e02d4289d91dc0e5edf40fc7376a47b5b5e0330345b168972e5a6"} Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.005529 4904 scope.go:117] "RemoveContainer" containerID="ff3c84f4a4346f02575943f8a97211163143b3bcc9972b205d32f95aa98a4b92" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.005612 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wkqr4" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.024875 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fd96f8954-g9jwj"] Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.030196 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.035927 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-client-ca\") pod \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.035989 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-serving-cert\") pod \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.036032 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-proxy-ca-bundles\") pod \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.036103 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-config\") pod \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.036215 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vssml\" (UniqueName: \"kubernetes.io/projected/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-kube-api-access-vssml\") pod \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\" (UID: \"a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.037352 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-client-ca" (OuterVolumeSpecName: "client-ca") pod "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" (UID: "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.037366 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" (UID: "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.037888 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-config" (OuterVolumeSpecName: "config") pod "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" (UID: "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.047900 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-kube-api-access-vssml" (OuterVolumeSpecName: "kube-api-access-vssml") pod "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" (UID: "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5"). InnerVolumeSpecName "kube-api-access-vssml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.048805 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" (UID: "a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.137789 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9rbv\" (UniqueName: \"kubernetes.io/projected/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-kube-api-access-k9rbv\") pod \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.138140 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-serving-cert\") pod \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.138284 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-config\") pod \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.138409 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-client-ca\") pod \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\" (UID: \"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90\") " Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.138645 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-proxy-ca-bundles\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.138834 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-client-ca\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.138946 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-config\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: 
\"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.138964 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" (UID: "a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.138972 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-config" (OuterVolumeSpecName: "config") pod "a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" (UID: "a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139119 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbkv\" (UniqueName: \"kubernetes.io/projected/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-kube-api-access-jkbkv\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139312 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-serving-cert\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139371 4904 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139388 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139400 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139409 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vssml\" (UniqueName: \"kubernetes.io/projected/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-kube-api-access-vssml\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139420 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139429 4904 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-client-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.139437 4904 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.142085 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" (UID: "a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.142106 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-kube-api-access-k9rbv" (OuterVolumeSpecName: "kube-api-access-k9rbv") pod "a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" (UID: "a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90"). InnerVolumeSpecName "kube-api-access-k9rbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.241166 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbkv\" (UniqueName: \"kubernetes.io/projected/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-kube-api-access-jkbkv\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.241428 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-serving-cert\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.241515 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-proxy-ca-bundles\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.241580 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-client-ca\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.242082 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-config\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.243092 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-proxy-ca-bundles\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 
13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.243211 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-client-ca\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.243499 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-config\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.245701 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-serving-cert\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.253159 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9rbv\" (UniqueName: \"kubernetes.io/projected/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-kube-api-access-k9rbv\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.253446 4904 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.262818 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbkv\" (UniqueName: \"kubernetes.io/projected/f1a01f28-2ba8-4044-b4a1-c7049c8d9680-kube-api-access-jkbkv\") pod \"controller-manager-6fd96f8954-g9jwj\" (UID: \"f1a01f28-2ba8-4044-b4a1-c7049c8d9680\") " pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.330753 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wkqr4"] Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.334315 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wkqr4"] Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.346675 4904 util.go:30] "No sandbox for pod can be found. 
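Most of the churn in this stretch is the kubelet's volume manager tearing down the old replica's volumes (operationExecutor.UnmountVolume, UnmountVolume.TearDown, "Volume detached") while verifying, mounting, and setting up the new replica's (VerifyControllerAttachedVolume, MountVolume). A small tally sketch (a hypothetical helper; the pattern targets the backslash-escaped quoting exactly as these records appear in this journal):

```python
import re
from collections import Counter

# Matches e.g.: operationExecutor.MountVolume started for volume \"client-ca\" ...
op_re = re.compile(r'operationExecutor\.(\w+) started for volume \\"([^"\\]+)\\"')

def volume_op_tally(lines):
    counts = Counter()
    for line in lines:
        m = op_re.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1  # (operation, volume name)
    return counts
```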
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:45 crc kubenswrapper[4904]: I1121 13:44:45.539855 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fd96f8954-g9jwj"] Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.013388 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" event={"ID":"f1a01f28-2ba8-4044-b4a1-c7049c8d9680","Type":"ContainerStarted","Data":"05a846180ec8724f7bada28f9727f058d7c5bb64a0c69900abc89a0b7fe61964"} Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.013916 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" event={"ID":"f1a01f28-2ba8-4044-b4a1-c7049c8d9680","Type":"ContainerStarted","Data":"c2456783e6189ce8f756d9a929d06f5bb557bf57142195c8d439f42012128206"} Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.013967 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.015241 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" event={"ID":"a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90","Type":"ContainerDied","Data":"f7b119180bd4e2dfff34d9af12cce20617c32f02d06fb374a60ad7a8d65898aa"} Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.015293 4904 scope.go:117] "RemoveContainer" containerID="bdfaa2b070c75d4c025363e3a6cd51430d4d6c7fdf353b5d4daf387c810d99f5" Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.015305 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v" Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.019745 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.036884 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fd96f8954-g9jwj" podStartSLOduration=5.036867811 podStartE2EDuration="5.036867811s" podCreationTimestamp="2025-11-21 13:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:44:46.035863136 +0000 UTC m=+760.157395688" watchObservedRunningTime="2025-11-21 13:44:46.036867811 +0000 UTC m=+760.158400373" Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.051003 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"] Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.054839 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gqj7v"] Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.520438 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" path="/var/lib/kubelet/pods/a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90/volumes" Nov 21 13:44:46 crc kubenswrapper[4904]: I1121 13:44:46.520991 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5" path="/var/lib/kubelet/pods/a6456a0e-a019-4fe5-86ae-0f3ef7cdcdf5/volumes" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.361992 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr"] Nov 21 13:44:47 crc kubenswrapper[4904]: E1121 13:44:47.362216 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" containerName="route-controller-manager" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.362231 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" containerName="route-controller-manager" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.362348 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b26f1c-50df-4dee-b0e2-1c7b83f1ef90" containerName="route-controller-manager" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.362810 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.365930 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.366045 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.366301 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.366319 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.366521 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.366730 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.375518 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr"] Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.378787 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/247cca2e-5203-48f3-872e-41f5367bedab-config\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.378830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x48b\" (UniqueName: \"kubernetes.io/projected/247cca2e-5203-48f3-872e-41f5367bedab-kube-api-access-8x48b\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.378888 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/247cca2e-5203-48f3-872e-41f5367bedab-client-ca\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.378931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/247cca2e-5203-48f3-872e-41f5367bedab-serving-cert\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.480700 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/247cca2e-5203-48f3-872e-41f5367bedab-client-ca\") pod 
\"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.480779 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/247cca2e-5203-48f3-872e-41f5367bedab-serving-cert\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.480826 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/247cca2e-5203-48f3-872e-41f5367bedab-config\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.480842 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x48b\" (UniqueName: \"kubernetes.io/projected/247cca2e-5203-48f3-872e-41f5367bedab-kube-api-access-8x48b\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.482172 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/247cca2e-5203-48f3-872e-41f5367bedab-client-ca\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.483761 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/247cca2e-5203-48f3-872e-41f5367bedab-config\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.490221 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/247cca2e-5203-48f3-872e-41f5367bedab-serving-cert\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.509458 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x48b\" (UniqueName: \"kubernetes.io/projected/247cca2e-5203-48f3-872e-41f5367bedab-kube-api-access-8x48b\") pod \"route-controller-manager-5d9868789c-kq4lr\" (UID: \"247cca2e-5203-48f3-872e-41f5367bedab\") " pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:47 crc kubenswrapper[4904]: I1121 13:44:47.685731 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:48 crc kubenswrapper[4904]: I1121 13:44:48.095535 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr"] Nov 21 13:44:49 crc kubenswrapper[4904]: I1121 13:44:49.036195 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" event={"ID":"247cca2e-5203-48f3-872e-41f5367bedab","Type":"ContainerStarted","Data":"7642a3dde5a3eee3d7ed60a2eba1c0923fda2454b3c2441af6e87dd36b4922bb"} Nov 21 13:44:49 crc kubenswrapper[4904]: I1121 13:44:49.036263 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" event={"ID":"247cca2e-5203-48f3-872e-41f5367bedab","Type":"ContainerStarted","Data":"7249770cd0b797a4e43ea68ca82b164be1d391ce1e0f9eba9d1a88dbabed9536"} Nov 21 13:44:49 crc kubenswrapper[4904]: I1121 13:44:49.036676 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:49 crc kubenswrapper[4904]: I1121 13:44:49.042859 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" Nov 21 13:44:49 crc kubenswrapper[4904]: I1121 13:44:49.056633 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d9868789c-kq4lr" podStartSLOduration=8.05661166 podStartE2EDuration="8.05661166s" podCreationTimestamp="2025-11-21 13:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:44:49.05416269 +0000 UTC m=+763.175695242" watchObservedRunningTime="2025-11-21 13:44:49.05661166 +0000 UTC m=+763.178144212" Nov 21 13:44:53 crc kubenswrapper[4904]: I1121 13:44:53.057632 4904 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.664719 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zdchf"] Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.668272 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.708221 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdchf"] Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.725838 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-utilities\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.725898 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-catalog-content\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.725980 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95c96\" (UniqueName: \"kubernetes.io/projected/316f3835-c264-4fa4-be95-aaeb15aeeb7a-kube-api-access-95c96\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.827134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-utilities\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.827202 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-catalog-content\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.827264 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95c96\" (UniqueName: \"kubernetes.io/projected/316f3835-c264-4fa4-be95-aaeb15aeeb7a-kube-api-access-95c96\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.827744 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-utilities\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.828146 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-catalog-content\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:57 crc kubenswrapper[4904]: I1121 13:44:57.853729 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-95c96\" (UniqueName: \"kubernetes.io/projected/316f3835-c264-4fa4-be95-aaeb15aeeb7a-kube-api-access-95c96\") pod \"redhat-marketplace-zdchf\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:58 crc kubenswrapper[4904]: I1121 13:44:58.007809 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:44:58 crc kubenswrapper[4904]: I1121 13:44:58.114168 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:44:58 crc kubenswrapper[4904]: I1121 13:44:58.114244 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:44:58 crc kubenswrapper[4904]: I1121 13:44:58.114291 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:44:58 crc kubenswrapper[4904]: I1121 13:44:58.115131 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34e69668c2ea09c1fa2e0c0fbb6545e8671e2e8710d41559fb5aa9d7200b9106"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:44:58 crc kubenswrapper[4904]: I1121 13:44:58.115192 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://34e69668c2ea09c1fa2e0c0fbb6545e8671e2e8710d41559fb5aa9d7200b9106" gracePeriod=600 Nov 21 13:44:58 crc kubenswrapper[4904]: I1121 13:44:58.412990 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdchf"] Nov 21 13:44:58 crc kubenswrapper[4904]: W1121 13:44:58.426038 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod316f3835_c264_4fa4_be95_aaeb15aeeb7a.slice/crio-a2ff318234a2740bca71343ac1fe81eb653f0e85522368e05bf090008ec1e3a4 WatchSource:0}: Error finding container a2ff318234a2740bca71343ac1fe81eb653f0e85522368e05bf090008ec1e3a4: Status 404 returned error can't find the container with id a2ff318234a2740bca71343ac1fe81eb653f0e85522368e05bf090008ec1e3a4 Nov 21 13:44:59 crc kubenswrapper[4904]: I1121 13:44:59.095138 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="34e69668c2ea09c1fa2e0c0fbb6545e8671e2e8710d41559fb5aa9d7200b9106" exitCode=0 Nov 21 13:44:59 crc kubenswrapper[4904]: I1121 13:44:59.095221 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" 
event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"34e69668c2ea09c1fa2e0c0fbb6545e8671e2e8710d41559fb5aa9d7200b9106"} Nov 21 13:44:59 crc kubenswrapper[4904]: I1121 13:44:59.095264 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"be6a3c0a99c9505797540ba9588fe9f6a753a8471c941586b86762324c656b9e"} Nov 21 13:44:59 crc kubenswrapper[4904]: I1121 13:44:59.095283 4904 scope.go:117] "RemoveContainer" containerID="98139dd02c3ca616ca203db2c2722354a90ab7ddd2cf4f5a2d5c1eb3d2693d7a" Nov 21 13:44:59 crc kubenswrapper[4904]: I1121 13:44:59.098863 4904 generic.go:334] "Generic (PLEG): container finished" podID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerID="f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6" exitCode=0 Nov 21 13:44:59 crc kubenswrapper[4904]: I1121 13:44:59.098903 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdchf" event={"ID":"316f3835-c264-4fa4-be95-aaeb15aeeb7a","Type":"ContainerDied","Data":"f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6"} Nov 21 13:44:59 crc kubenswrapper[4904]: I1121 13:44:59.098933 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdchf" event={"ID":"316f3835-c264-4fa4-be95-aaeb15aeeb7a","Type":"ContainerStarted","Data":"a2ff318234a2740bca71343ac1fe81eb653f0e85522368e05bf090008ec1e3a4"} Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.111940 4904 generic.go:334] "Generic (PLEG): container finished" podID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerID="2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f" exitCode=0 Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.112021 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdchf" event={"ID":"316f3835-c264-4fa4-be95-aaeb15aeeb7a","Type":"ContainerDied","Data":"2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f"} Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.146056 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn"] Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.146835 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.149274 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.149307 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.158580 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn"] Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.256758 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713735aa-1fd7-4ee5-8106-9be266844186-secret-volume\") pod \"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.256866 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713735aa-1fd7-4ee5-8106-9be266844186-config-volume\") pod \"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.257054 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cgrw\" (UniqueName: \"kubernetes.io/projected/713735aa-1fd7-4ee5-8106-9be266844186-kube-api-access-5cgrw\") pod \"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.358163 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713735aa-1fd7-4ee5-8106-9be266844186-config-volume\") pod \"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.358215 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cgrw\" (UniqueName: \"kubernetes.io/projected/713735aa-1fd7-4ee5-8106-9be266844186-kube-api-access-5cgrw\") pod \"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.358250 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713735aa-1fd7-4ee5-8106-9be266844186-secret-volume\") pod \"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.359177 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713735aa-1fd7-4ee5-8106-9be266844186-config-volume\") pod 
\"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.364586 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713735aa-1fd7-4ee5-8106-9be266844186-secret-volume\") pod \"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.375510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cgrw\" (UniqueName: \"kubernetes.io/projected/713735aa-1fd7-4ee5-8106-9be266844186-kube-api-access-5cgrw\") pod \"collect-profiles-29395545-dkqzn\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.513473 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" Nov 21 13:45:00 crc kubenswrapper[4904]: I1121 13:45:00.927377 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn"] Nov 21 13:45:00 crc kubenswrapper[4904]: W1121 13:45:00.932442 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod713735aa_1fd7_4ee5_8106_9be266844186.slice/crio-d73072e394376c006b48ac38a23d92db0b36dc4deddfdb2d061155534e09772a WatchSource:0}: Error finding container d73072e394376c006b48ac38a23d92db0b36dc4deddfdb2d061155534e09772a: Status 404 returned error can't find the container with id d73072e394376c006b48ac38a23d92db0b36dc4deddfdb2d061155534e09772a Nov 21 13:45:01 crc kubenswrapper[4904]: I1121 13:45:01.130256 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" event={"ID":"713735aa-1fd7-4ee5-8106-9be266844186","Type":"ContainerStarted","Data":"5acbc35650d7bdf78a2e0a7d659d083b409421928c437f46f034205818038033"} Nov 21 13:45:01 crc kubenswrapper[4904]: I1121 13:45:01.130586 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" event={"ID":"713735aa-1fd7-4ee5-8106-9be266844186","Type":"ContainerStarted","Data":"d73072e394376c006b48ac38a23d92db0b36dc4deddfdb2d061155534e09772a"} Nov 21 13:45:01 crc kubenswrapper[4904]: I1121 13:45:01.133168 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdchf" event={"ID":"316f3835-c264-4fa4-be95-aaeb15aeeb7a","Type":"ContainerStarted","Data":"263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c"} Nov 21 13:45:01 crc kubenswrapper[4904]: I1121 13:45:01.174975 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zdchf" podStartSLOduration=2.527197019 podStartE2EDuration="4.174954701s" podCreationTimestamp="2025-11-21 13:44:57 +0000 UTC" firstStartedPulling="2025-11-21 13:44:59.100681982 +0000 UTC m=+773.222214534" lastFinishedPulling="2025-11-21 13:45:00.748439664 +0000 UTC m=+774.869972216" observedRunningTime="2025-11-21 13:45:01.172150852 +0000 UTC m=+775.293683414" watchObservedRunningTime="2025-11-21 
Nov 21 13:45:01 crc kubenswrapper[4904]: I1121 13:45:01.178480 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" podStartSLOduration=1.178470799 podStartE2EDuration="1.178470799s" podCreationTimestamp="2025-11-21 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:45:01.150740553 +0000 UTC m=+775.272273135" watchObservedRunningTime="2025-11-21 13:45:01.178470799 +0000 UTC m=+775.300003371"
Nov 21 13:45:02 crc kubenswrapper[4904]: I1121 13:45:02.142191 4904 generic.go:334] "Generic (PLEG): container finished" podID="713735aa-1fd7-4ee5-8106-9be266844186" containerID="5acbc35650d7bdf78a2e0a7d659d083b409421928c437f46f034205818038033" exitCode=0
Nov 21 13:45:02 crc kubenswrapper[4904]: I1121 13:45:02.142298 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" event={"ID":"713735aa-1fd7-4ee5-8106-9be266844186","Type":"ContainerDied","Data":"5acbc35650d7bdf78a2e0a7d659d083b409421928c437f46f034205818038033"}
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.505019 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn"
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.607421 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cgrw\" (UniqueName: \"kubernetes.io/projected/713735aa-1fd7-4ee5-8106-9be266844186-kube-api-access-5cgrw\") pod \"713735aa-1fd7-4ee5-8106-9be266844186\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") "
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.607698 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713735aa-1fd7-4ee5-8106-9be266844186-config-volume\") pod \"713735aa-1fd7-4ee5-8106-9be266844186\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") "
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.607766 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713735aa-1fd7-4ee5-8106-9be266844186-secret-volume\") pod \"713735aa-1fd7-4ee5-8106-9be266844186\" (UID: \"713735aa-1fd7-4ee5-8106-9be266844186\") "
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.608537 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/713735aa-1fd7-4ee5-8106-9be266844186-config-volume" (OuterVolumeSpecName: "config-volume") pod "713735aa-1fd7-4ee5-8106-9be266844186" (UID: "713735aa-1fd7-4ee5-8106-9be266844186"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.609765 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/713735aa-1fd7-4ee5-8106-9be266844186-config-volume\") on node \"crc\" DevicePath \"\""
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.614503 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713735aa-1fd7-4ee5-8106-9be266844186-kube-api-access-5cgrw" (OuterVolumeSpecName: "kube-api-access-5cgrw") pod "713735aa-1fd7-4ee5-8106-9be266844186" (UID: "713735aa-1fd7-4ee5-8106-9be266844186"). InnerVolumeSpecName "kube-api-access-5cgrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.614889 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713735aa-1fd7-4ee5-8106-9be266844186-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "713735aa-1fd7-4ee5-8106-9be266844186" (UID: "713735aa-1fd7-4ee5-8106-9be266844186"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.711438 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/713735aa-1fd7-4ee5-8106-9be266844186-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 21 13:45:03 crc kubenswrapper[4904]: I1121 13:45:03.711492 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cgrw\" (UniqueName: \"kubernetes.io/projected/713735aa-1fd7-4ee5-8106-9be266844186-kube-api-access-5cgrw\") on node \"crc\" DevicePath \"\""
Nov 21 13:45:04 crc kubenswrapper[4904]: I1121 13:45:04.159261 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn" event={"ID":"713735aa-1fd7-4ee5-8106-9be266844186","Type":"ContainerDied","Data":"d73072e394376c006b48ac38a23d92db0b36dc4deddfdb2d061155534e09772a"}
Nov 21 13:45:04 crc kubenswrapper[4904]: I1121 13:45:04.159324 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d73072e394376c006b48ac38a23d92db0b36dc4deddfdb2d061155534e09772a"
Nov 21 13:45:04 crc kubenswrapper[4904]: I1121 13:45:04.159405 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn"
Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.008856 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zdchf"
Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.009461 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zdchf"
Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.060086 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zdchf"
Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.258852 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zdchf"
Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.406539 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc"]
Nov 21 13:45:08 crc kubenswrapper[4904]: E1121 13:45:08.406964 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713735aa-1fd7-4ee5-8106-9be266844186" containerName="collect-profiles"
Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.406991 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="713735aa-1fd7-4ee5-8106-9be266844186" containerName="collect-profiles"
Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.407129 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="713735aa-1fd7-4ee5-8106-9be266844186" containerName="collect-profiles"
Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.408329 4904 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.412063 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.420216 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc"] Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.579517 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.579572 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqb6v\" (UniqueName: \"kubernetes.io/projected/1060d3aa-9053-4e42-bc60-ff037f067cb9-kube-api-access-cqb6v\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.579603 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.599691 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz"] Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.600832 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.611064 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz"] Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.680906 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.680959 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.681007 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqb6v\" (UniqueName: \"kubernetes.io/projected/1060d3aa-9053-4e42-bc60-ff037f067cb9-kube-api-access-cqb6v\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.681037 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.681094 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.681116 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr2kx\" (UniqueName: \"kubernetes.io/projected/8071e483-279a-45ad-a73e-c5d487e982d0-kube-api-access-hr2kx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.681540 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " 
pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.681834 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.700318 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqb6v\" (UniqueName: \"kubernetes.io/projected/1060d3aa-9053-4e42-bc60-ff037f067cb9-kube-api-access-cqb6v\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.779528 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.782597 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.782684 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr2kx\" (UniqueName: \"kubernetes.io/projected/8071e483-279a-45ad-a73e-c5d487e982d0-kube-api-access-hr2kx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.782735 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.783161 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.783297 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc 
kubenswrapper[4904]: I1121 13:45:08.802273 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr2kx\" (UniqueName: \"kubernetes.io/projected/8071e483-279a-45ad-a73e-c5d487e982d0-kube-api-access-hr2kx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:08 crc kubenswrapper[4904]: I1121 13:45:08.919178 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:09 crc kubenswrapper[4904]: I1121 13:45:09.213473 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc"] Nov 21 13:45:09 crc kubenswrapper[4904]: W1121 13:45:09.220335 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1060d3aa_9053_4e42_bc60_ff037f067cb9.slice/crio-e735b6fd95ce0c16748f2df764250d40afa390183c380ae8f1574ced6464c528 WatchSource:0}: Error finding container e735b6fd95ce0c16748f2df764250d40afa390183c380ae8f1574ced6464c528: Status 404 returned error can't find the container with id e735b6fd95ce0c16748f2df764250d40afa390183c380ae8f1574ced6464c528 Nov 21 13:45:09 crc kubenswrapper[4904]: I1121 13:45:09.317154 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz"] Nov 21 13:45:09 crc kubenswrapper[4904]: W1121 13:45:09.322646 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8071e483_279a_45ad_a73e_c5d487e982d0.slice/crio-26cc7868b81c09a2d4d411e57dd456517f6ef4c750adf37879fedf15ffdd72ce WatchSource:0}: Error finding container 26cc7868b81c09a2d4d411e57dd456517f6ef4c750adf37879fedf15ffdd72ce: Status 404 returned error can't find the container with id 26cc7868b81c09a2d4d411e57dd456517f6ef4c750adf37879fedf15ffdd72ce Nov 21 13:45:10 crc kubenswrapper[4904]: I1121 13:45:10.204152 4904 generic.go:334] "Generic (PLEG): container finished" podID="8071e483-279a-45ad-a73e-c5d487e982d0" containerID="2861f02990fd50317d6d033e47af03a9cc4033bddf79416600d7a00279474613" exitCode=0 Nov 21 13:45:10 crc kubenswrapper[4904]: I1121 13:45:10.204280 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" event={"ID":"8071e483-279a-45ad-a73e-c5d487e982d0","Type":"ContainerDied","Data":"2861f02990fd50317d6d033e47af03a9cc4033bddf79416600d7a00279474613"} Nov 21 13:45:10 crc kubenswrapper[4904]: I1121 13:45:10.204338 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" event={"ID":"8071e483-279a-45ad-a73e-c5d487e982d0","Type":"ContainerStarted","Data":"26cc7868b81c09a2d4d411e57dd456517f6ef4c750adf37879fedf15ffdd72ce"} Nov 21 13:45:10 crc kubenswrapper[4904]: I1121 13:45:10.206000 4904 generic.go:334] "Generic (PLEG): container finished" podID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerID="ee07f23a1d75d9500aebaa976e058afa2794be7bc2a8542d56c7c4b2885bd850" exitCode=0 Nov 21 13:45:10 crc kubenswrapper[4904]: I1121 13:45:10.206046 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" event={"ID":"1060d3aa-9053-4e42-bc60-ff037f067cb9","Type":"ContainerDied","Data":"ee07f23a1d75d9500aebaa976e058afa2794be7bc2a8542d56c7c4b2885bd850"} Nov 21 13:45:10 crc kubenswrapper[4904]: I1121 13:45:10.206083 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" event={"ID":"1060d3aa-9053-4e42-bc60-ff037f067cb9","Type":"ContainerStarted","Data":"e735b6fd95ce0c16748f2df764250d40afa390183c380ae8f1574ced6464c528"} Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.225038 4904 generic.go:334] "Generic (PLEG): container finished" podID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerID="6dcef83c5c873e5d4e3127cde2ede2519340b29b4743e36e50b6e452c8c17647" exitCode=0 Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.225120 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" event={"ID":"1060d3aa-9053-4e42-bc60-ff037f067cb9","Type":"ContainerDied","Data":"6dcef83c5c873e5d4e3127cde2ede2519340b29b4743e36e50b6e452c8c17647"} Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.550953 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2rmvh"] Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.552408 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.562358 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2rmvh"] Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.634320 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bcz2\" (UniqueName: \"kubernetes.io/projected/51cd2634-9a08-4629-9f2c-43e3995c4410-kube-api-access-8bcz2\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.634419 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-catalog-content\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.634470 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-utilities\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.736634 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bcz2\" (UniqueName: \"kubernetes.io/projected/51cd2634-9a08-4629-9f2c-43e3995c4410-kube-api-access-8bcz2\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.736765 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-catalog-content\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.736813 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-utilities\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.737365 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-utilities\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.737607 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-catalog-content\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.760402 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bcz2\" (UniqueName: \"kubernetes.io/projected/51cd2634-9a08-4629-9f2c-43e3995c4410-kube-api-access-8bcz2\") pod \"certified-operators-2rmvh\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:12 crc kubenswrapper[4904]: I1121 13:45:12.894283 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.141784 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdchf"] Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.142437 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zdchf" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerName="registry-server" containerID="cri-o://263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c" gracePeriod=2 Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.236589 4904 generic.go:334] "Generic (PLEG): container finished" podID="8071e483-279a-45ad-a73e-c5d487e982d0" containerID="d90fef81b3b9896ce0b04b1b93962d0bb2228d4f2781df9888c43abf2b60fecd" exitCode=0 Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.236754 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" event={"ID":"8071e483-279a-45ad-a73e-c5d487e982d0","Type":"ContainerDied","Data":"d90fef81b3b9896ce0b04b1b93962d0bb2228d4f2781df9888c43abf2b60fecd"} Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.249142 4904 generic.go:334] "Generic (PLEG): container finished" podID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerID="d14be76834b3606297f779ccbf645efb0d6457b491434ac94e9cf410ff33c703" exitCode=0 Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.249196 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" event={"ID":"1060d3aa-9053-4e42-bc60-ff037f067cb9","Type":"ContainerDied","Data":"d14be76834b3606297f779ccbf645efb0d6457b491434ac94e9cf410ff33c703"} Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.392162 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2rmvh"] Nov 21 13:45:13 crc kubenswrapper[4904]: W1121 13:45:13.411579 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51cd2634_9a08_4629_9f2c_43e3995c4410.slice/crio-4310ddb744b775599b13ef066a0ead6639f96b15ba0ea0dd6e9cb530ef17e1f6 WatchSource:0}: Error finding container 4310ddb744b775599b13ef066a0ead6639f96b15ba0ea0dd6e9cb530ef17e1f6: Status 404 returned error can't find the container with id 4310ddb744b775599b13ef066a0ead6639f96b15ba0ea0dd6e9cb530ef17e1f6 Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.671602 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.754320 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-catalog-content\") pod \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.754411 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-utilities\") pod \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.754477 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95c96\" (UniqueName: \"kubernetes.io/projected/316f3835-c264-4fa4-be95-aaeb15aeeb7a-kube-api-access-95c96\") pod \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\" (UID: \"316f3835-c264-4fa4-be95-aaeb15aeeb7a\") " Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.755436 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-utilities" (OuterVolumeSpecName: "utilities") pod "316f3835-c264-4fa4-be95-aaeb15aeeb7a" (UID: "316f3835-c264-4fa4-be95-aaeb15aeeb7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.761954 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316f3835-c264-4fa4-be95-aaeb15aeeb7a-kube-api-access-95c96" (OuterVolumeSpecName: "kube-api-access-95c96") pod "316f3835-c264-4fa4-be95-aaeb15aeeb7a" (UID: "316f3835-c264-4fa4-be95-aaeb15aeeb7a"). InnerVolumeSpecName "kube-api-access-95c96". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.774928 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "316f3835-c264-4fa4-be95-aaeb15aeeb7a" (UID: "316f3835-c264-4fa4-be95-aaeb15aeeb7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.856127 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95c96\" (UniqueName: \"kubernetes.io/projected/316f3835-c264-4fa4-be95-aaeb15aeeb7a-kube-api-access-95c96\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.856166 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:13 crc kubenswrapper[4904]: I1121 13:45:13.856176 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316f3835-c264-4fa4-be95-aaeb15aeeb7a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.259876 4904 generic.go:334] "Generic (PLEG): container finished" podID="8071e483-279a-45ad-a73e-c5d487e982d0" containerID="90bbe271baedd59cc05825341ca757ee156a4daa15bd6b876956109c553e21a6" exitCode=0 Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.259982 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" event={"ID":"8071e483-279a-45ad-a73e-c5d487e982d0","Type":"ContainerDied","Data":"90bbe271baedd59cc05825341ca757ee156a4daa15bd6b876956109c553e21a6"} Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.262268 4904 generic.go:334] "Generic (PLEG): container finished" podID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerID="263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c" exitCode=0 Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.262329 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdchf" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.262387 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdchf" event={"ID":"316f3835-c264-4fa4-be95-aaeb15aeeb7a","Type":"ContainerDied","Data":"263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c"} Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.262462 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdchf" event={"ID":"316f3835-c264-4fa4-be95-aaeb15aeeb7a","Type":"ContainerDied","Data":"a2ff318234a2740bca71343ac1fe81eb653f0e85522368e05bf090008ec1e3a4"} Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.262493 4904 scope.go:117] "RemoveContainer" containerID="263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.264544 4904 generic.go:334] "Generic (PLEG): container finished" podID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerID="c2ce1c19b9e7e7191d7768911b471328296255371dc6c84f55b6ee84471711ad" exitCode=0 Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.265557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rmvh" event={"ID":"51cd2634-9a08-4629-9f2c-43e3995c4410","Type":"ContainerDied","Data":"c2ce1c19b9e7e7191d7768911b471328296255371dc6c84f55b6ee84471711ad"} Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.265579 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rmvh" event={"ID":"51cd2634-9a08-4629-9f2c-43e3995c4410","Type":"ContainerStarted","Data":"4310ddb744b775599b13ef066a0ead6639f96b15ba0ea0dd6e9cb530ef17e1f6"} Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.279957 4904 scope.go:117] "RemoveContainer" containerID="2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.322776 4904 scope.go:117] "RemoveContainer" containerID="f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.345320 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdchf"] Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.351597 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdchf"] Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.355353 4904 scope.go:117] "RemoveContainer" containerID="263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c" Nov 21 13:45:14 crc kubenswrapper[4904]: E1121 13:45:14.356316 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c\": container with ID starting with 263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c not found: ID does not exist" containerID="263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.356353 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c"} err="failed to get container status \"263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c\": rpc error: code = NotFound desc = could not find container 
\"263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c\": container with ID starting with 263b01b271c7b87f6d6c69e344d0d0b31a7b4182145ff444539f8ee52ff55f5c not found: ID does not exist" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.356372 4904 scope.go:117] "RemoveContainer" containerID="2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f" Nov 21 13:45:14 crc kubenswrapper[4904]: E1121 13:45:14.356812 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f\": container with ID starting with 2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f not found: ID does not exist" containerID="2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.356858 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f"} err="failed to get container status \"2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f\": rpc error: code = NotFound desc = could not find container \"2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f\": container with ID starting with 2d4e6dc8be6045252daff9a44eb5a6b9bd277f568005941b02d0a97cb3d0753f not found: ID does not exist" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.356887 4904 scope.go:117] "RemoveContainer" containerID="f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6" Nov 21 13:45:14 crc kubenswrapper[4904]: E1121 13:45:14.357338 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6\": container with ID starting with f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6 not found: ID does not exist" containerID="f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.357360 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6"} err="failed to get container status \"f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6\": rpc error: code = NotFound desc = could not find container \"f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6\": container with ID starting with f6ce612ad2ae0aa245e78ed3832a61a37f361a3e4d0a995d4127563b6b22fad6 not found: ID does not exist" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.523004 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" path="/var/lib/kubelet/pods/316f3835-c264-4fa4-be95-aaeb15aeeb7a/volumes" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.680558 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.768856 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqb6v\" (UniqueName: \"kubernetes.io/projected/1060d3aa-9053-4e42-bc60-ff037f067cb9-kube-api-access-cqb6v\") pod \"1060d3aa-9053-4e42-bc60-ff037f067cb9\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.768935 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-bundle\") pod \"1060d3aa-9053-4e42-bc60-ff037f067cb9\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.768986 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-util\") pod \"1060d3aa-9053-4e42-bc60-ff037f067cb9\" (UID: \"1060d3aa-9053-4e42-bc60-ff037f067cb9\") " Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.770389 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-bundle" (OuterVolumeSpecName: "bundle") pod "1060d3aa-9053-4e42-bc60-ff037f067cb9" (UID: "1060d3aa-9053-4e42-bc60-ff037f067cb9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.772220 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1060d3aa-9053-4e42-bc60-ff037f067cb9-kube-api-access-cqb6v" (OuterVolumeSpecName: "kube-api-access-cqb6v") pod "1060d3aa-9053-4e42-bc60-ff037f067cb9" (UID: "1060d3aa-9053-4e42-bc60-ff037f067cb9"). InnerVolumeSpecName "kube-api-access-cqb6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.782411 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-util" (OuterVolumeSpecName: "util") pod "1060d3aa-9053-4e42-bc60-ff037f067cb9" (UID: "1060d3aa-9053-4e42-bc60-ff037f067cb9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.871812 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqb6v\" (UniqueName: \"kubernetes.io/projected/1060d3aa-9053-4e42-bc60-ff037f067cb9-kube-api-access-cqb6v\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.871888 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:14 crc kubenswrapper[4904]: I1121 13:45:14.871900 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1060d3aa-9053-4e42-bc60-ff037f067cb9-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.274510 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" event={"ID":"1060d3aa-9053-4e42-bc60-ff037f067cb9","Type":"ContainerDied","Data":"e735b6fd95ce0c16748f2df764250d40afa390183c380ae8f1574ced6464c528"} Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.274543 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e735b6fd95ce0c16748f2df764250d40afa390183c380ae8f1574ced6464c528" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.274588 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.276207 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rmvh" event={"ID":"51cd2634-9a08-4629-9f2c-43e3995c4410","Type":"ContainerStarted","Data":"e37f7d3d9b22abf05ba238cbc002ab8dd2d95af1e0008530c9fe60d89cdcb17e"} Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.652101 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.783454 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-bundle\") pod \"8071e483-279a-45ad-a73e-c5d487e982d0\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.783571 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr2kx\" (UniqueName: \"kubernetes.io/projected/8071e483-279a-45ad-a73e-c5d487e982d0-kube-api-access-hr2kx\") pod \"8071e483-279a-45ad-a73e-c5d487e982d0\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.783593 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-util\") pod \"8071e483-279a-45ad-a73e-c5d487e982d0\" (UID: \"8071e483-279a-45ad-a73e-c5d487e982d0\") " Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.784823 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-bundle" (OuterVolumeSpecName: "bundle") pod "8071e483-279a-45ad-a73e-c5d487e982d0" (UID: "8071e483-279a-45ad-a73e-c5d487e982d0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.794770 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-util" (OuterVolumeSpecName: "util") pod "8071e483-279a-45ad-a73e-c5d487e982d0" (UID: "8071e483-279a-45ad-a73e-c5d487e982d0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.794976 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8071e483-279a-45ad-a73e-c5d487e982d0-kube-api-access-hr2kx" (OuterVolumeSpecName: "kube-api-access-hr2kx") pod "8071e483-279a-45ad-a73e-c5d487e982d0" (UID: "8071e483-279a-45ad-a73e-c5d487e982d0"). InnerVolumeSpecName "kube-api-access-hr2kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.884694 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr2kx\" (UniqueName: \"kubernetes.io/projected/8071e483-279a-45ad-a73e-c5d487e982d0-kube-api-access-hr2kx\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.884747 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:15 crc kubenswrapper[4904]: I1121 13:45:15.884757 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8071e483-279a-45ad-a73e-c5d487e982d0-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:16 crc kubenswrapper[4904]: I1121 13:45:16.286207 4904 generic.go:334] "Generic (PLEG): container finished" podID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerID="e37f7d3d9b22abf05ba238cbc002ab8dd2d95af1e0008530c9fe60d89cdcb17e" exitCode=0 Nov 21 13:45:16 crc kubenswrapper[4904]: I1121 13:45:16.286293 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rmvh" event={"ID":"51cd2634-9a08-4629-9f2c-43e3995c4410","Type":"ContainerDied","Data":"e37f7d3d9b22abf05ba238cbc002ab8dd2d95af1e0008530c9fe60d89cdcb17e"} Nov 21 13:45:16 crc kubenswrapper[4904]: I1121 13:45:16.289109 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" event={"ID":"8071e483-279a-45ad-a73e-c5d487e982d0","Type":"ContainerDied","Data":"26cc7868b81c09a2d4d411e57dd456517f6ef4c750adf37879fedf15ffdd72ce"} Nov 21 13:45:16 crc kubenswrapper[4904]: I1121 13:45:16.289170 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26cc7868b81c09a2d4d411e57dd456517f6ef4c750adf37879fedf15ffdd72ce" Nov 21 13:45:16 crc kubenswrapper[4904]: I1121 13:45:16.289265 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.298922 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rmvh" event={"ID":"51cd2634-9a08-4629-9f2c-43e3995c4410","Type":"ContainerStarted","Data":"39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d"} Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.324927 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2rmvh" podStartSLOduration=2.886253593 podStartE2EDuration="5.324902608s" podCreationTimestamp="2025-11-21 13:45:12 +0000 UTC" firstStartedPulling="2025-11-21 13:45:14.270001266 +0000 UTC m=+788.391533808" lastFinishedPulling="2025-11-21 13:45:16.708650211 +0000 UTC m=+790.830182823" observedRunningTime="2025-11-21 13:45:17.321073262 +0000 UTC m=+791.442605835" watchObservedRunningTime="2025-11-21 13:45:17.324902608 +0000 UTC m=+791.446435170" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.352871 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f9nmd"] Nov 21 13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353296 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8071e483-279a-45ad-a73e-c5d487e982d0" containerName="util" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353329 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8071e483-279a-45ad-a73e-c5d487e982d0" containerName="util" Nov 21 13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353352 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8071e483-279a-45ad-a73e-c5d487e982d0" containerName="extract" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353364 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8071e483-279a-45ad-a73e-c5d487e982d0" containerName="extract" Nov 21 13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353382 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerName="extract-utilities" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353400 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerName="extract-utilities" Nov 21 13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353417 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerName="util" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353430 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerName="util" Nov 21 13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353450 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerName="registry-server" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353460 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerName="registry-server" Nov 21 13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353475 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8071e483-279a-45ad-a73e-c5d487e982d0" containerName="pull" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353490 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8071e483-279a-45ad-a73e-c5d487e982d0" containerName="pull" Nov 21 
13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353510 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerName="pull" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353522 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerName="pull" Nov 21 13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353541 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerName="extract" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353552 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerName="extract" Nov 21 13:45:17 crc kubenswrapper[4904]: E1121 13:45:17.353571 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerName="extract-content" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353584 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerName="extract-content" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353797 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="316f3835-c264-4fa4-be95-aaeb15aeeb7a" containerName="registry-server" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353825 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="8071e483-279a-45ad-a73e-c5d487e982d0" containerName="extract" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.353838 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1060d3aa-9053-4e42-bc60-ff037f067cb9" containerName="extract" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.355275 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.373751 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9nmd"] Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.509039 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-utilities\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.509125 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5j8w\" (UniqueName: \"kubernetes.io/projected/45292b6c-0c90-4602-b259-ab1b89608db0-kube-api-access-d5j8w\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.509172 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-catalog-content\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.610048 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-utilities\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.610394 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5j8w\" (UniqueName: \"kubernetes.io/projected/45292b6c-0c90-4602-b259-ab1b89608db0-kube-api-access-d5j8w\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.610504 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-catalog-content\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.610543 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-utilities\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.610862 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-catalog-content\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.631983 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d5j8w\" (UniqueName: \"kubernetes.io/projected/45292b6c-0c90-4602-b259-ab1b89608db0-kube-api-access-d5j8w\") pod \"redhat-operators-f9nmd\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.674178 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.954392 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nwzcj"] Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.956295 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:17 crc kubenswrapper[4904]: I1121 13:45:17.963070 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwzcj"] Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.116826 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-catalog-content\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.116936 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55rxs\" (UniqueName: \"kubernetes.io/projected/273251c7-978b-489a-9c44-388139114762-kube-api-access-55rxs\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.116988 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-utilities\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.180003 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f9nmd"] Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.220918 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55rxs\" (UniqueName: \"kubernetes.io/projected/273251c7-978b-489a-9c44-388139114762-kube-api-access-55rxs\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.221004 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-utilities\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.221047 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-catalog-content\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " 
pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.221618 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-catalog-content\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.222316 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-utilities\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.265342 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55rxs\" (UniqueName: \"kubernetes.io/projected/273251c7-978b-489a-9c44-388139114762-kube-api-access-55rxs\") pod \"community-operators-nwzcj\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.273967 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.306445 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9nmd" event={"ID":"45292b6c-0c90-4602-b259-ab1b89608db0","Type":"ContainerStarted","Data":"c577e1439157be36397a9f91640336f3363c1d44629b4ba2307659127036565d"} Nov 21 13:45:18 crc kubenswrapper[4904]: I1121 13:45:18.839452 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwzcj"] Nov 21 13:45:18 crc kubenswrapper[4904]: W1121 13:45:18.845382 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod273251c7_978b_489a_9c44_388139114762.slice/crio-f359b9b8fa4352e06b905c40934665dc9a54dbb5196f964619d426c5439fb18d WatchSource:0}: Error finding container f359b9b8fa4352e06b905c40934665dc9a54dbb5196f964619d426c5439fb18d: Status 404 returned error can't find the container with id f359b9b8fa4352e06b905c40934665dc9a54dbb5196f964619d426c5439fb18d Nov 21 13:45:19 crc kubenswrapper[4904]: I1121 13:45:19.314582 4904 generic.go:334] "Generic (PLEG): container finished" podID="45292b6c-0c90-4602-b259-ab1b89608db0" containerID="9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947" exitCode=0 Nov 21 13:45:19 crc kubenswrapper[4904]: I1121 13:45:19.314699 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9nmd" event={"ID":"45292b6c-0c90-4602-b259-ab1b89608db0","Type":"ContainerDied","Data":"9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947"} Nov 21 13:45:19 crc kubenswrapper[4904]: I1121 13:45:19.317715 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwzcj" event={"ID":"273251c7-978b-489a-9c44-388139114762","Type":"ContainerDied","Data":"36b01778617c998aaf07cf61afb0334e00722a9fca77cb0f6301650a14b166b7"} Nov 21 13:45:19 crc kubenswrapper[4904]: I1121 13:45:19.317620 4904 generic.go:334] "Generic (PLEG): container finished" podID="273251c7-978b-489a-9c44-388139114762" 
containerID="36b01778617c998aaf07cf61afb0334e00722a9fca77cb0f6301650a14b166b7" exitCode=0 Nov 21 13:45:19 crc kubenswrapper[4904]: I1121 13:45:19.318832 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwzcj" event={"ID":"273251c7-978b-489a-9c44-388139114762","Type":"ContainerStarted","Data":"f359b9b8fa4352e06b905c40934665dc9a54dbb5196f964619d426c5439fb18d"} Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.333367 4904 generic.go:334] "Generic (PLEG): container finished" podID="273251c7-978b-489a-9c44-388139114762" containerID="73c7a671577cd731e168a342b087ec3657a6c93ec5e00e9c8cd10a3ed344d1a3" exitCode=0 Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.333407 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwzcj" event={"ID":"273251c7-978b-489a-9c44-388139114762","Type":"ContainerDied","Data":"73c7a671577cd731e168a342b087ec3657a6c93ec5e00e9c8cd10a3ed344d1a3"} Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.558300 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-pc7gc"] Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.559502 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-pc7gc" Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.562775 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.563179 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-tg5m8" Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.564062 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.585219 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-pc7gc"] Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.689211 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnw5m\" (UniqueName: \"kubernetes.io/projected/24f73a17-3c0e-4e6b-9a16-461582908e22-kube-api-access-xnw5m\") pod \"cluster-logging-operator-ff9846bd-pc7gc\" (UID: \"24f73a17-3c0e-4e6b-9a16-461582908e22\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-pc7gc" Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.791822 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnw5m\" (UniqueName: \"kubernetes.io/projected/24f73a17-3c0e-4e6b-9a16-461582908e22-kube-api-access-xnw5m\") pod \"cluster-logging-operator-ff9846bd-pc7gc\" (UID: \"24f73a17-3c0e-4e6b-9a16-461582908e22\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-pc7gc" Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.813492 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnw5m\" (UniqueName: \"kubernetes.io/projected/24f73a17-3c0e-4e6b-9a16-461582908e22-kube-api-access-xnw5m\") pod \"cluster-logging-operator-ff9846bd-pc7gc\" (UID: \"24f73a17-3c0e-4e6b-9a16-461582908e22\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-pc7gc" Nov 21 13:45:21 crc kubenswrapper[4904]: I1121 13:45:21.881588 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-pc7gc" Nov 21 13:45:22 crc kubenswrapper[4904]: I1121 13:45:22.343185 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwzcj" event={"ID":"273251c7-978b-489a-9c44-388139114762","Type":"ContainerStarted","Data":"d4a3e2f26e5c508e7d9e84dc5b39f231dac16759a1caf3f311b6772dd2ff911a"} Nov 21 13:45:22 crc kubenswrapper[4904]: I1121 13:45:22.374422 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nwzcj" podStartSLOduration=2.766872074 podStartE2EDuration="5.374403517s" podCreationTimestamp="2025-11-21 13:45:17 +0000 UTC" firstStartedPulling="2025-11-21 13:45:19.318675216 +0000 UTC m=+793.440207768" lastFinishedPulling="2025-11-21 13:45:21.926206649 +0000 UTC m=+796.047739211" observedRunningTime="2025-11-21 13:45:22.371060515 +0000 UTC m=+796.492593067" watchObservedRunningTime="2025-11-21 13:45:22.374403517 +0000 UTC m=+796.495936069" Nov 21 13:45:22 crc kubenswrapper[4904]: I1121 13:45:22.405168 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-pc7gc"] Nov 21 13:45:22 crc kubenswrapper[4904]: I1121 13:45:22.894812 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:22 crc kubenswrapper[4904]: I1121 13:45:22.895322 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:22 crc kubenswrapper[4904]: I1121 13:45:22.942808 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:23 crc kubenswrapper[4904]: I1121 13:45:23.363255 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-pc7gc" event={"ID":"24f73a17-3c0e-4e6b-9a16-461582908e22","Type":"ContainerStarted","Data":"9c22ff6dcb71161ca5014c61c69dcc487ec4cb3938d9bf3857681847a24bd91f"} Nov 21 13:45:23 crc kubenswrapper[4904]: I1121 13:45:23.415978 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:28 crc kubenswrapper[4904]: I1121 13:45:28.275117 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:28 crc kubenswrapper[4904]: I1121 13:45:28.275686 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:28 crc kubenswrapper[4904]: I1121 13:45:28.330590 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:28 crc kubenswrapper[4904]: I1121 13:45:28.444339 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:29 crc kubenswrapper[4904]: I1121 13:45:29.139811 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2rmvh"] Nov 21 13:45:29 crc kubenswrapper[4904]: I1121 13:45:29.140620 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2rmvh" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="registry-server" 
containerID="cri-o://39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d" gracePeriod=2 Nov 21 13:45:29 crc kubenswrapper[4904]: I1121 13:45:29.411988 4904 generic.go:334] "Generic (PLEG): container finished" podID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerID="39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d" exitCode=0 Nov 21 13:45:29 crc kubenswrapper[4904]: I1121 13:45:29.412095 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rmvh" event={"ID":"51cd2634-9a08-4629-9f2c-43e3995c4410","Type":"ContainerDied","Data":"39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d"} Nov 21 13:45:29 crc kubenswrapper[4904]: I1121 13:45:29.742328 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwzcj"] Nov 21 13:45:29 crc kubenswrapper[4904]: I1121 13:45:29.999440 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch"] Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.001038 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.005545 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.006456 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.006777 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-wrhm8" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.007242 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.007610 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.009478 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.024139 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch"] Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.114439 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-apiservice-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.114488 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-webhook-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.114679 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rmcv\" (UniqueName: \"kubernetes.io/projected/d9a95797-5a30-4849-8c30-13f3ff99a4c2-kube-api-access-6rmcv\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.114725 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d9a95797-5a30-4849-8c30-13f3ff99a4c2-manager-config\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.114796 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.216589 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-apiservice-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.216646 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-webhook-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.216726 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rmcv\" (UniqueName: \"kubernetes.io/projected/d9a95797-5a30-4849-8c30-13f3ff99a4c2-kube-api-access-6rmcv\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.216749 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d9a95797-5a30-4849-8c30-13f3ff99a4c2-manager-config\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.216780 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.218000 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d9a95797-5a30-4849-8c30-13f3ff99a4c2-manager-config\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.222775 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-webhook-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.223091 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-apiservice-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.228565 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d9a95797-5a30-4849-8c30-13f3ff99a4c2-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.245579 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rmcv\" (UniqueName: \"kubernetes.io/projected/d9a95797-5a30-4849-8c30-13f3ff99a4c2-kube-api-access-6rmcv\") pod \"loki-operator-controller-manager-5fc6c85b79-9lzch\" (UID: \"d9a95797-5a30-4849-8c30-13f3ff99a4c2\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.316698 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:30 crc kubenswrapper[4904]: I1121 13:45:30.417938 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nwzcj" podUID="273251c7-978b-489a-9c44-388139114762" containerName="registry-server" containerID="cri-o://d4a3e2f26e5c508e7d9e84dc5b39f231dac16759a1caf3f311b6772dd2ff911a" gracePeriod=2 Nov 21 13:45:31 crc kubenswrapper[4904]: I1121 13:45:31.026270 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch"] Nov 21 13:45:31 crc kubenswrapper[4904]: I1121 13:45:31.426812 4904 generic.go:334] "Generic (PLEG): container finished" podID="273251c7-978b-489a-9c44-388139114762" containerID="d4a3e2f26e5c508e7d9e84dc5b39f231dac16759a1caf3f311b6772dd2ff911a" exitCode=0 Nov 21 13:45:31 crc kubenswrapper[4904]: I1121 13:45:31.426853 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwzcj" event={"ID":"273251c7-978b-489a-9c44-388139114762","Type":"ContainerDied","Data":"d4a3e2f26e5c508e7d9e84dc5b39f231dac16759a1caf3f311b6772dd2ff911a"} Nov 21 13:45:31 crc kubenswrapper[4904]: I1121 13:45:31.431124 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9nmd" event={"ID":"45292b6c-0c90-4602-b259-ab1b89608db0","Type":"ContainerStarted","Data":"fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e"} Nov 21 13:45:32 crc kubenswrapper[4904]: I1121 13:45:32.441155 4904 generic.go:334] "Generic (PLEG): container finished" podID="45292b6c-0c90-4602-b259-ab1b89608db0" containerID="fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e" exitCode=0 Nov 21 13:45:32 crc kubenswrapper[4904]: I1121 13:45:32.441226 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9nmd" event={"ID":"45292b6c-0c90-4602-b259-ab1b89608db0","Type":"ContainerDied","Data":"fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e"} Nov 21 13:45:32 crc kubenswrapper[4904]: E1121 13:45:32.896646 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d is running failed: container process not found" containerID="39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:45:32 crc kubenswrapper[4904]: E1121 13:45:32.897085 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d is running failed: container process not found" containerID="39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:45:32 crc kubenswrapper[4904]: E1121 13:45:32.897377 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d is running failed: container process not found" containerID="39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d" cmd=["grpc_health_probe","-addr=:50051"] Nov 21 13:45:32 crc kubenswrapper[4904]: E1121 
13:45:32.897417 4904 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-2rmvh" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="registry-server" Nov 21 13:45:34 crc kubenswrapper[4904]: W1121 13:45:34.652297 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9a95797_5a30_4849_8c30_13f3ff99a4c2.slice/crio-12766cf7932d92d97127e9f83de45a7a663961a13671e450c1547a44d1c4177f WatchSource:0}: Error finding container 12766cf7932d92d97127e9f83de45a7a663961a13671e450c1547a44d1c4177f: Status 404 returned error can't find the container with id 12766cf7932d92d97127e9f83de45a7a663961a13671e450c1547a44d1c4177f Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.703395 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.712100 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.792482 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-utilities\") pod \"51cd2634-9a08-4629-9f2c-43e3995c4410\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.792869 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55rxs\" (UniqueName: \"kubernetes.io/projected/273251c7-978b-489a-9c44-388139114762-kube-api-access-55rxs\") pod \"273251c7-978b-489a-9c44-388139114762\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.792898 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-utilities\") pod \"273251c7-978b-489a-9c44-388139114762\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.792928 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-catalog-content\") pod \"273251c7-978b-489a-9c44-388139114762\" (UID: \"273251c7-978b-489a-9c44-388139114762\") " Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.792984 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-catalog-content\") pod \"51cd2634-9a08-4629-9f2c-43e3995c4410\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.793025 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bcz2\" (UniqueName: \"kubernetes.io/projected/51cd2634-9a08-4629-9f2c-43e3995c4410-kube-api-access-8bcz2\") pod \"51cd2634-9a08-4629-9f2c-43e3995c4410\" (UID: \"51cd2634-9a08-4629-9f2c-43e3995c4410\") " Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.793263 
4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-utilities" (OuterVolumeSpecName: "utilities") pod "51cd2634-9a08-4629-9f2c-43e3995c4410" (UID: "51cd2634-9a08-4629-9f2c-43e3995c4410"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.793396 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.794416 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-utilities" (OuterVolumeSpecName: "utilities") pod "273251c7-978b-489a-9c44-388139114762" (UID: "273251c7-978b-489a-9c44-388139114762"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.799957 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/273251c7-978b-489a-9c44-388139114762-kube-api-access-55rxs" (OuterVolumeSpecName: "kube-api-access-55rxs") pod "273251c7-978b-489a-9c44-388139114762" (UID: "273251c7-978b-489a-9c44-388139114762"). InnerVolumeSpecName "kube-api-access-55rxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.803286 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51cd2634-9a08-4629-9f2c-43e3995c4410-kube-api-access-8bcz2" (OuterVolumeSpecName: "kube-api-access-8bcz2") pod "51cd2634-9a08-4629-9f2c-43e3995c4410" (UID: "51cd2634-9a08-4629-9f2c-43e3995c4410"). InnerVolumeSpecName "kube-api-access-8bcz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.852486 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51cd2634-9a08-4629-9f2c-43e3995c4410" (UID: "51cd2634-9a08-4629-9f2c-43e3995c4410"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.885098 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "273251c7-978b-489a-9c44-388139114762" (UID: "273251c7-978b-489a-9c44-388139114762"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.894886 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55rxs\" (UniqueName: \"kubernetes.io/projected/273251c7-978b-489a-9c44-388139114762-kube-api-access-55rxs\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.894936 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.894950 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/273251c7-978b-489a-9c44-388139114762-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.894962 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51cd2634-9a08-4629-9f2c-43e3995c4410-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:34 crc kubenswrapper[4904]: I1121 13:45:34.894975 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bcz2\" (UniqueName: \"kubernetes.io/projected/51cd2634-9a08-4629-9f2c-43e3995c4410-kube-api-access-8bcz2\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.464497 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwzcj" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.464449 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwzcj" event={"ID":"273251c7-978b-489a-9c44-388139114762","Type":"ContainerDied","Data":"f359b9b8fa4352e06b905c40934665dc9a54dbb5196f964619d426c5439fb18d"} Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.465086 4904 scope.go:117] "RemoveContainer" containerID="d4a3e2f26e5c508e7d9e84dc5b39f231dac16759a1caf3f311b6772dd2ff911a" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.466426 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9nmd" event={"ID":"45292b6c-0c90-4602-b259-ab1b89608db0","Type":"ContainerStarted","Data":"244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354"} Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.474076 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2rmvh" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.474104 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2rmvh" event={"ID":"51cd2634-9a08-4629-9f2c-43e3995c4410","Type":"ContainerDied","Data":"4310ddb744b775599b13ef066a0ead6639f96b15ba0ea0dd6e9cb530ef17e1f6"} Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.475920 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" event={"ID":"d9a95797-5a30-4849-8c30-13f3ff99a4c2","Type":"ContainerStarted","Data":"12766cf7932d92d97127e9f83de45a7a663961a13671e450c1547a44d1c4177f"} Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.480347 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-pc7gc" event={"ID":"24f73a17-3c0e-4e6b-9a16-461582908e22","Type":"ContainerStarted","Data":"e9f368a53c11cd9240ab1025071cccbfec0b1ff98871c826e3237dc4e2d78696"} Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.488788 4904 scope.go:117] "RemoveContainer" containerID="73c7a671577cd731e168a342b087ec3657a6c93ec5e00e9c8cd10a3ed344d1a3" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.512332 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f9nmd" podStartSLOduration=2.728579405 podStartE2EDuration="18.512303603s" podCreationTimestamp="2025-11-21 13:45:17 +0000 UTC" firstStartedPulling="2025-11-21 13:45:19.316685686 +0000 UTC m=+793.438218238" lastFinishedPulling="2025-11-21 13:45:35.100409884 +0000 UTC m=+809.221942436" observedRunningTime="2025-11-21 13:45:35.489995719 +0000 UTC m=+809.611528291" watchObservedRunningTime="2025-11-21 13:45:35.512303603 +0000 UTC m=+809.633836155" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.515060 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-pc7gc" podStartSLOduration=2.2610140420000002 podStartE2EDuration="14.515047071s" podCreationTimestamp="2025-11-21 13:45:21 +0000 UTC" firstStartedPulling="2025-11-21 13:45:22.430801637 +0000 UTC m=+796.552334189" lastFinishedPulling="2025-11-21 13:45:34.684834666 +0000 UTC m=+808.806367218" observedRunningTime="2025-11-21 13:45:35.514308642 +0000 UTC m=+809.635841194" watchObservedRunningTime="2025-11-21 13:45:35.515047071 +0000 UTC m=+809.636579623" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.525983 4904 scope.go:117] "RemoveContainer" containerID="36b01778617c998aaf07cf61afb0334e00722a9fca77cb0f6301650a14b166b7" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.553144 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwzcj"] Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.556749 4904 scope.go:117] "RemoveContainer" containerID="39f58e909798e05f898578194ca0a82415bc475907bc9c713d33bdde6cba016d" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.562693 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nwzcj"] Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.571843 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2rmvh"] Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.580005 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-2rmvh"] Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.589117 4904 scope.go:117] "RemoveContainer" containerID="e37f7d3d9b22abf05ba238cbc002ab8dd2d95af1e0008530c9fe60d89cdcb17e" Nov 21 13:45:35 crc kubenswrapper[4904]: I1121 13:45:35.609781 4904 scope.go:117] "RemoveContainer" containerID="c2ce1c19b9e7e7191d7768911b471328296255371dc6c84f55b6ee84471711ad" Nov 21 13:45:36 crc kubenswrapper[4904]: I1121 13:45:36.522606 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="273251c7-978b-489a-9c44-388139114762" path="/var/lib/kubelet/pods/273251c7-978b-489a-9c44-388139114762/volumes" Nov 21 13:45:36 crc kubenswrapper[4904]: I1121 13:45:36.523468 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" path="/var/lib/kubelet/pods/51cd2634-9a08-4629-9f2c-43e3995c4410/volumes" Nov 21 13:45:37 crc kubenswrapper[4904]: I1121 13:45:37.674518 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:37 crc kubenswrapper[4904]: I1121 13:45:37.675029 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:38 crc kubenswrapper[4904]: I1121 13:45:38.730500 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f9nmd" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="registry-server" probeResult="failure" output=< Nov 21 13:45:38 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 13:45:38 crc kubenswrapper[4904]: > Nov 21 13:45:42 crc kubenswrapper[4904]: I1121 13:45:42.536339 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" event={"ID":"d9a95797-5a30-4849-8c30-13f3ff99a4c2","Type":"ContainerStarted","Data":"ca13da907cd36bf0fd1f6e57a058e378123c55bacc49a794f01c9cf6e69cd0ed"} Nov 21 13:45:47 crc kubenswrapper[4904]: I1121 13:45:47.720040 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:47 crc kubenswrapper[4904]: I1121 13:45:47.775340 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:51 crc kubenswrapper[4904]: I1121 13:45:51.604940 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" event={"ID":"d9a95797-5a30-4849-8c30-13f3ff99a4c2","Type":"ContainerStarted","Data":"6065101b846213fe99baf6c00a847ae9f12cb9a68dc867d62c038665e0296b92"} Nov 21 13:45:51 crc kubenswrapper[4904]: I1121 13:45:51.607443 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:51 crc kubenswrapper[4904]: I1121 13:45:51.610880 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" Nov 21 13:45:51 crc kubenswrapper[4904]: I1121 13:45:51.633599 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5fc6c85b79-9lzch" podStartSLOduration=6.592241755 podStartE2EDuration="22.633573715s" podCreationTimestamp="2025-11-21 13:45:29 
+0000 UTC" firstStartedPulling="2025-11-21 13:45:34.66164428 +0000 UTC m=+808.783176832" lastFinishedPulling="2025-11-21 13:45:50.70297623 +0000 UTC m=+824.824508792" observedRunningTime="2025-11-21 13:45:51.633310757 +0000 UTC m=+825.754843339" watchObservedRunningTime="2025-11-21 13:45:51.633573715 +0000 UTC m=+825.755106287" Nov 21 13:45:51 crc kubenswrapper[4904]: I1121 13:45:51.747321 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9nmd"] Nov 21 13:45:51 crc kubenswrapper[4904]: I1121 13:45:51.747771 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f9nmd" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="registry-server" containerID="cri-o://244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354" gracePeriod=2 Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.157280 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.270426 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-utilities\") pod \"45292b6c-0c90-4602-b259-ab1b89608db0\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.270565 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-catalog-content\") pod \"45292b6c-0c90-4602-b259-ab1b89608db0\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.270614 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5j8w\" (UniqueName: \"kubernetes.io/projected/45292b6c-0c90-4602-b259-ab1b89608db0-kube-api-access-d5j8w\") pod \"45292b6c-0c90-4602-b259-ab1b89608db0\" (UID: \"45292b6c-0c90-4602-b259-ab1b89608db0\") " Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.271611 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-utilities" (OuterVolumeSpecName: "utilities") pod "45292b6c-0c90-4602-b259-ab1b89608db0" (UID: "45292b6c-0c90-4602-b259-ab1b89608db0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.277872 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45292b6c-0c90-4602-b259-ab1b89608db0-kube-api-access-d5j8w" (OuterVolumeSpecName: "kube-api-access-d5j8w") pod "45292b6c-0c90-4602-b259-ab1b89608db0" (UID: "45292b6c-0c90-4602-b259-ab1b89608db0"). InnerVolumeSpecName "kube-api-access-d5j8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.352459 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45292b6c-0c90-4602-b259-ab1b89608db0" (UID: "45292b6c-0c90-4602-b259-ab1b89608db0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.372611 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.372645 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45292b6c-0c90-4602-b259-ab1b89608db0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.372676 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5j8w\" (UniqueName: \"kubernetes.io/projected/45292b6c-0c90-4602-b259-ab1b89608db0-kube-api-access-d5j8w\") on node \"crc\" DevicePath \"\"" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.613027 4904 generic.go:334] "Generic (PLEG): container finished" podID="45292b6c-0c90-4602-b259-ab1b89608db0" containerID="244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354" exitCode=0 Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.613800 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f9nmd" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.613820 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9nmd" event={"ID":"45292b6c-0c90-4602-b259-ab1b89608db0","Type":"ContainerDied","Data":"244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354"} Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.613936 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f9nmd" event={"ID":"45292b6c-0c90-4602-b259-ab1b89608db0","Type":"ContainerDied","Data":"c577e1439157be36397a9f91640336f3363c1d44629b4ba2307659127036565d"} Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.613978 4904 scope.go:117] "RemoveContainer" containerID="244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.641335 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f9nmd"] Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.643313 4904 scope.go:117] "RemoveContainer" containerID="fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.647071 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f9nmd"] Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.660844 4904 scope.go:117] "RemoveContainer" containerID="9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.678228 4904 scope.go:117] "RemoveContainer" containerID="244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354" Nov 21 13:45:52 crc kubenswrapper[4904]: E1121 13:45:52.678677 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354\": container with ID starting with 244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354 not found: ID does not exist" containerID="244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.678706 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354"} err="failed to get container status \"244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354\": rpc error: code = NotFound desc = could not find container \"244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354\": container with ID starting with 244c5a1d7e83c7523525eb8912de0c4c4df6aaf0f96f59d34262cd34710c5354 not found: ID does not exist" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.678730 4904 scope.go:117] "RemoveContainer" containerID="fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e" Nov 21 13:45:52 crc kubenswrapper[4904]: E1121 13:45:52.679035 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e\": container with ID starting with fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e not found: ID does not exist" containerID="fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.679100 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e"} err="failed to get container status \"fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e\": rpc error: code = NotFound desc = could not find container \"fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e\": container with ID starting with fe9812cde7d5d0496b1ca0472019756a3ffe3e040fe420e9832da2bf7433bb3e not found: ID does not exist" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.679142 4904 scope.go:117] "RemoveContainer" containerID="9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947" Nov 21 13:45:52 crc kubenswrapper[4904]: E1121 13:45:52.679566 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947\": container with ID starting with 9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947 not found: ID does not exist" containerID="9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947" Nov 21 13:45:52 crc kubenswrapper[4904]: I1121 13:45:52.679593 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947"} err="failed to get container status \"9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947\": rpc error: code = NotFound desc = could not find container \"9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947\": container with ID starting with 9563aea8619171e46c6a74afd0cd1020ba9bd7214d0ded8953af3a4d00f2e947 not found: ID does not exist" Nov 21 13:45:54 crc kubenswrapper[4904]: I1121 13:45:54.520421 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" path="/var/lib/kubelet/pods/45292b6c-0c90-4602-b259-ab1b89608db0/volumes" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.429958 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430475 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="extract-content" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430490 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="extract-content" Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430504 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273251c7-978b-489a-9c44-388139114762" containerName="extract-content" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430511 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="273251c7-978b-489a-9c44-388139114762" containerName="extract-content" Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430529 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="extract-utilities" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430537 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="extract-utilities" Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430548 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430554 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430565 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="extract-content" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430572 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="extract-content" Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430582 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273251c7-978b-489a-9c44-388139114762" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430589 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="273251c7-978b-489a-9c44-388139114762" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430598 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="273251c7-978b-489a-9c44-388139114762" containerName="extract-utilities" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430605 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="273251c7-978b-489a-9c44-388139114762" containerName="extract-utilities" Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430617 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430623 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: E1121 13:45:56.430634 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="extract-utilities" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430641 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="extract-utilities" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430776 4904 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="273251c7-978b-489a-9c44-388139114762" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430785 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="45292b6c-0c90-4602-b259-ab1b89608db0" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.430797 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cd2634-9a08-4629-9f2c-43e3995c4410" containerName="registry-server" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.431406 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.434383 4904 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-vg96d" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.435683 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.436329 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.447402 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.529326 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4368cfa9-7b8f-4744-acf0-54ed86eb8f56\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4368cfa9-7b8f-4744-acf0-54ed86eb8f56\") pod \"minio\" (UID: \"45551ca5-aa20-4f77-8228-1f4c7e4d42d5\") " pod="minio-dev/minio" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.529398 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxsn\" (UniqueName: \"kubernetes.io/projected/45551ca5-aa20-4f77-8228-1f4c7e4d42d5-kube-api-access-wjxsn\") pod \"minio\" (UID: \"45551ca5-aa20-4f77-8228-1f4c7e4d42d5\") " pod="minio-dev/minio" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.630703 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4368cfa9-7b8f-4744-acf0-54ed86eb8f56\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4368cfa9-7b8f-4744-acf0-54ed86eb8f56\") pod \"minio\" (UID: \"45551ca5-aa20-4f77-8228-1f4c7e4d42d5\") " pod="minio-dev/minio" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.631130 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjxsn\" (UniqueName: \"kubernetes.io/projected/45551ca5-aa20-4f77-8228-1f4c7e4d42d5-kube-api-access-wjxsn\") pod \"minio\" (UID: \"45551ca5-aa20-4f77-8228-1f4c7e4d42d5\") " pod="minio-dev/minio" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.634776 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.634832 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4368cfa9-7b8f-4744-acf0-54ed86eb8f56\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4368cfa9-7b8f-4744-acf0-54ed86eb8f56\") pod \"minio\" (UID: \"45551ca5-aa20-4f77-8228-1f4c7e4d42d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4b0d06f563278fd09ccba2360ed5491f5bad81dd128aedfb5e9175d87fe9214/globalmount\"" pod="minio-dev/minio" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.656172 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjxsn\" (UniqueName: \"kubernetes.io/projected/45551ca5-aa20-4f77-8228-1f4c7e4d42d5-kube-api-access-wjxsn\") pod \"minio\" (UID: \"45551ca5-aa20-4f77-8228-1f4c7e4d42d5\") " pod="minio-dev/minio" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.666830 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4368cfa9-7b8f-4744-acf0-54ed86eb8f56\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4368cfa9-7b8f-4744-acf0-54ed86eb8f56\") pod \"minio\" (UID: \"45551ca5-aa20-4f77-8228-1f4c7e4d42d5\") " pod="minio-dev/minio" Nov 21 13:45:56 crc kubenswrapper[4904]: I1121 13:45:56.749382 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 21 13:45:57 crc kubenswrapper[4904]: I1121 13:45:57.185836 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 21 13:45:57 crc kubenswrapper[4904]: I1121 13:45:57.651735 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"45551ca5-aa20-4f77-8228-1f4c7e4d42d5","Type":"ContainerStarted","Data":"2c601edc77f3654515f067563b5cdcfc45ed3b0ca2ef13000e5347b33d47e2eb"} Nov 21 13:46:00 crc kubenswrapper[4904]: I1121 13:46:00.680587 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"45551ca5-aa20-4f77-8228-1f4c7e4d42d5","Type":"ContainerStarted","Data":"edcf7a24f8a68259d3c7560a0eed8a99a2157051282c4c4a806569f4b7c9193a"} Nov 21 13:46:00 crc kubenswrapper[4904]: I1121 13:46:00.699269 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.82698126 podStartE2EDuration="7.699251681s" podCreationTimestamp="2025-11-21 13:45:53 +0000 UTC" firstStartedPulling="2025-11-21 13:45:57.197492635 +0000 UTC m=+831.319025187" lastFinishedPulling="2025-11-21 13:46:00.069763046 +0000 UTC m=+834.191295608" observedRunningTime="2025-11-21 13:46:00.696993596 +0000 UTC m=+834.818526178" watchObservedRunningTime="2025-11-21 13:46:00.699251681 +0000 UTC m=+834.820784233" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.428075 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-6rt82"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.429276 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.434298 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.434823 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.435052 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.435069 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.438772 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-s6lp9" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.443975 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-6rt82"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.542591 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-zd9gb"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.543486 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.548249 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.548834 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.548984 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.575020 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e29add-5250-42af-af50-40efeec82a2d-config\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.575633 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtr9z\" (UniqueName: \"kubernetes.io/projected/e4e29add-5250-42af-af50-40efeec82a2d-kube-api-access-rtr9z\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.575821 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.575950 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.576060 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.600695 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-zd9gb"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.634074 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.635269 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.639231 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.640206 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.655525 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679124 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679178 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679241 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679263 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679295 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e29add-5250-42af-af50-40efeec82a2d-config\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679319 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtr9z\" (UniqueName: \"kubernetes.io/projected/e4e29add-5250-42af-af50-40efeec82a2d-kube-api-access-rtr9z\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679346 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93fb0ef8-1685-4951-b2db-a0921787ff1a-config\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679375 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679392 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679418 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.679436 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6xgb\" (UniqueName: \"kubernetes.io/projected/93fb0ef8-1685-4951-b2db-a0921787ff1a-kube-api-access-m6xgb\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.680120 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-ca-bundle\") 
pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.680326 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4e29add-5250-42af-af50-40efeec82a2d-config\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.686467 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.686947 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e4e29add-5250-42af-af50-40efeec82a2d-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.697242 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtr9z\" (UniqueName: \"kubernetes.io/projected/e4e29add-5250-42af-af50-40efeec82a2d-kube-api-access-rtr9z\") pod \"logging-loki-distributor-76cc67bf56-6rt82\" (UID: \"e4e29add-5250-42af-af50-40efeec82a2d\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.739745 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.741119 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.745091 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.745241 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.745309 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.745356 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.745318 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.754082 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.764816 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.765752 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.767197 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-cjg65" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.770516 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780360 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780436 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93fb0ef8-1685-4951-b2db-a0921787ff1a-config\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780473 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf77g\" (UniqueName: \"kubernetes.io/projected/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-kube-api-access-hf77g\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780507 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780537 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780601 
4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6xgb\" (UniqueName: \"kubernetes.io/projected/93fb0ef8-1685-4951-b2db-a0921787ff1a-kube-api-access-m6xgb\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780632 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780686 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780728 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-config\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.780756 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.781752 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.782074 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93fb0ef8-1685-4951-b2db-a0921787ff1a-config\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.788810 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.789806 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" 
(UniqueName: \"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.803838 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77"] Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.810459 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/93fb0ef8-1685-4951-b2db-a0921787ff1a-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.827280 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6xgb\" (UniqueName: \"kubernetes.io/projected/93fb0ef8-1685-4951-b2db-a0921787ff1a-kube-api-access-m6xgb\") pod \"logging-loki-querier-5895d59bb8-zd9gb\" (UID: \"93fb0ef8-1685-4951-b2db-a0921787ff1a\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.861126 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.881815 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-rbac\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882283 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9wz\" (UniqueName: \"kubernetes.io/projected/0c5c39d2-3a23-41c1-bec3-317ca97022f4-kube-api-access-vf9wz\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882329 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-rbac\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882360 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-lokistack-gateway\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882400 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-tls-secret\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " 
pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882417 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-lokistack-gateway\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882442 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882484 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-tenants\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882502 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-495t5\" (UniqueName: \"kubernetes.io/projected/fafeb02f-1864-4678-9846-d799fa1bc3c4-kube-api-access-495t5\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882522 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882566 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-config\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882595 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-tls-secret\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882611 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " 
pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882681 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-tenants\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882702 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882721 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882770 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882787 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf77g\" (UniqueName: \"kubernetes.io/projected/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-kube-api-access-hf77g\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882808 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882848 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.882870 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-gateway-ca-bundle\") pod 
\"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.886353 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-config\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.892526 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.894064 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.896293 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.903367 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf77g\" (UniqueName: \"kubernetes.io/projected/a74cf7e4-1b6c-4547-bb31-eea3e99e49a1-kube-api-access-hf77g\") pod \"logging-loki-query-frontend-84558f7c9f-dlmmw\" (UID: \"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.949402 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.983918 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-rbac\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.983959 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf9wz\" (UniqueName: \"kubernetes.io/projected/0c5c39d2-3a23-41c1-bec3-317ca97022f4-kube-api-access-vf9wz\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.983978 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-rbac\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984004 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-lokistack-gateway\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984025 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-tls-secret\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984041 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-lokistack-gateway\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-tenants\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984075 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-495t5\" (UniqueName: \"kubernetes.io/projected/fafeb02f-1864-4678-9846-d799fa1bc3c4-kube-api-access-495t5\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984094 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984122 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-tls-secret\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984140 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984163 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-tenants\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984180 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984221 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.984244 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.985069 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.985694 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-rbac\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.986586 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-rbac\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.987430 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-lokistack-gateway\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.989285 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.990841 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.992981 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.993012 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/fafeb02f-1864-4678-9846-d799fa1bc3c4-lokistack-gateway\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.993619 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " 
pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:05 crc kubenswrapper[4904]: I1121 13:46:05.993797 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-tenants\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.007379 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-tls-secret\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.009183 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-tenants\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.009688 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c5c39d2-3a23-41c1-bec3-317ca97022f4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.011147 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/fafeb02f-1864-4678-9846-d799fa1bc3c4-tls-secret\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.012072 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf9wz\" (UniqueName: \"kubernetes.io/projected/0c5c39d2-3a23-41c1-bec3-317ca97022f4-kube-api-access-vf9wz\") pod \"logging-loki-gateway-5bc6fc7c85-zrb77\" (UID: \"0c5c39d2-3a23-41c1-bec3-317ca97022f4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.015372 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-495t5\" (UniqueName: \"kubernetes.io/projected/fafeb02f-1864-4678-9846-d799fa1bc3c4-kube-api-access-495t5\") pod \"logging-loki-gateway-5bc6fc7c85-psz2n\" (UID: \"fafeb02f-1864-4678-9846-d799fa1bc3c4\") " pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.022798 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-6rt82"] Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.061820 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.124305 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-zd9gb"] Nov 21 13:46:06 crc kubenswrapper[4904]: W1121 13:46:06.137875 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93fb0ef8_1685_4951_b2db_a0921787ff1a.slice/crio-7c210b3deeab9a15b4cd62370657f9628f3d824a41cea6723dde1035b3b68b36 WatchSource:0}: Error finding container 7c210b3deeab9a15b4cd62370657f9628f3d824a41cea6723dde1035b3b68b36: Status 404 returned error can't find the container with id 7c210b3deeab9a15b4cd62370657f9628f3d824a41cea6723dde1035b3b68b36 Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.165022 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.353005 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n"] Nov 21 13:46:06 crc kubenswrapper[4904]: W1121 13:46:06.357356 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfafeb02f_1864_4678_9846_d799fa1bc3c4.slice/crio-da1119c5e3e1071abf07ad1ad40e37b7b215d082fb29148eb6e7b458464a6324 WatchSource:0}: Error finding container da1119c5e3e1071abf07ad1ad40e37b7b215d082fb29148eb6e7b458464a6324: Status 404 returned error can't find the container with id da1119c5e3e1071abf07ad1ad40e37b7b215d082fb29148eb6e7b458464a6324 Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.393191 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw"] Nov 21 13:46:06 crc kubenswrapper[4904]: W1121 13:46:06.399450 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda74cf7e4_1b6c_4547_bb31_eea3e99e49a1.slice/crio-66130f41751433198c0927b65a46810231006dcd33d984142e7afe5f53dd5414 WatchSource:0}: Error finding container 66130f41751433198c0927b65a46810231006dcd33d984142e7afe5f53dd5414: Status 404 returned error can't find the container with id 66130f41751433198c0927b65a46810231006dcd33d984142e7afe5f53dd5414 Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.566070 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.568080 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.579528 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.614576 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.615006 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.626316 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.627179 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.629194 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.630302 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.660045 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.675477 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77"] Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715216 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715260 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715287 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715317 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1c86bda2-49e2-452e-802e-fe8e9390c9f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c86bda2-49e2-452e-802e-fe8e9390c9f1\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715345 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xsqt\" 
(UniqueName: \"kubernetes.io/projected/fb578f45-397a-422e-af3c-04841227ef67-kube-api-access-6xsqt\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715384 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715405 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715465 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a22ba63-eedd-4823-b6a6-6bd7575fa641\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a22ba63-eedd-4823-b6a6-6bd7575fa641\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715488 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvpmw\" (UniqueName: \"kubernetes.io/projected/4177ccdd-88ab-419a-a189-2f1af9b587e3-kube-api-access-tvpmw\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715507 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-605a8af9-532a-41a7-b664-5166b3aef87e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-605a8af9-532a-41a7-b664-5166b3aef87e\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715541 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715565 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb578f45-397a-422e-af3c-04841227ef67-config\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715590 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4177ccdd-88ab-419a-a189-2f1af9b587e3-config\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " 
pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715608 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.715629 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.723952 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" event={"ID":"93fb0ef8-1685-4951-b2db-a0921787ff1a","Type":"ContainerStarted","Data":"7c210b3deeab9a15b4cd62370657f9628f3d824a41cea6723dde1035b3b68b36"} Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.725703 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" event={"ID":"0c5c39d2-3a23-41c1-bec3-317ca97022f4","Type":"ContainerStarted","Data":"a23eead60ba52a8dbb0e2fde7449fe6b08ab82cdd14aa970f59b697d815ed301"} Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.727837 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" event={"ID":"e4e29add-5250-42af-af50-40efeec82a2d","Type":"ContainerStarted","Data":"2486f81901e1dc325adc100173889cb14342bf801dae6fc283e592b4221d031d"} Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.729410 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" event={"ID":"fafeb02f-1864-4678-9846-d799fa1bc3c4","Type":"ContainerStarted","Data":"da1119c5e3e1071abf07ad1ad40e37b7b215d082fb29148eb6e7b458464a6324"} Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.730381 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" event={"ID":"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1","Type":"ContainerStarted","Data":"66130f41751433198c0927b65a46810231006dcd33d984142e7afe5f53dd5414"} Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.731467 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.732681 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.735591 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.737302 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.742354 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.817507 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.817563 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-86606301-4770-43fb-bc1e-0034fd6c25d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86606301-4770-43fb-bc1e-0034fd6c25d3\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.817931 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.817981 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.818004 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.819080 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.819184 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1c86bda2-49e2-452e-802e-fe8e9390c9f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c86bda2-49e2-452e-802e-fe8e9390c9f1\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: 
I1121 13:46:06.819997 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820037 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xsqt\" (UniqueName: \"kubernetes.io/projected/fb578f45-397a-422e-af3c-04841227ef67-kube-api-access-6xsqt\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820080 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820141 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820185 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820202 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820224 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820248 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9a22ba63-eedd-4823-b6a6-6bd7575fa641\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a22ba63-eedd-4823-b6a6-6bd7575fa641\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820270 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvpmw\" (UniqueName: 
\"kubernetes.io/projected/4177ccdd-88ab-419a-a189-2f1af9b587e3-kube-api-access-tvpmw\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.820288 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-605a8af9-532a-41a7-b664-5166b3aef87e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-605a8af9-532a-41a7-b664-5166b3aef87e\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.821226 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v745n\" (UniqueName: \"kubernetes.io/projected/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-kube-api-access-v745n\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.821271 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.821312 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.821348 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb578f45-397a-422e-af3c-04841227ef67-config\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.821407 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4177ccdd-88ab-419a-a189-2f1af9b587e3-config\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.821436 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.822844 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4177ccdd-88ab-419a-a189-2f1af9b587e3-config\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.822871 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/fb578f45-397a-422e-af3c-04841227ef67-config\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.822862 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.822970 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.823001 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1c86bda2-49e2-452e-802e-fe8e9390c9f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c86bda2-49e2-452e-802e-fe8e9390c9f1\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ec67e4ad2fb738bfab1c7890451d32a7f4d61010b1f22179859152bff7bad333/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.825854 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.826030 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.826076 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.826119 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9a22ba63-eedd-4823-b6a6-6bd7575fa641\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a22ba63-eedd-4823-b6a6-6bd7575fa641\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6b18c4a508f4f563ff95a05461aaf07930b4ee5a1f1632e0e9efcd271e05b747/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.826809 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.826838 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-605a8af9-532a-41a7-b664-5166b3aef87e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-605a8af9-532a-41a7-b664-5166b3aef87e\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c6821840d28e20b5c4171d5ff848a745ffa4ffa2fabb5855fbbc6fa37c5f4741/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.827983 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/fb578f45-397a-422e-af3c-04841227ef67-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.828331 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.834407 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.835524 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4177ccdd-88ab-419a-a189-2f1af9b587e3-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.840730 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xsqt\" (UniqueName: \"kubernetes.io/projected/fb578f45-397a-422e-af3c-04841227ef67-kube-api-access-6xsqt\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.841099 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvpmw\" (UniqueName: \"kubernetes.io/projected/4177ccdd-88ab-419a-a189-2f1af9b587e3-kube-api-access-tvpmw\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.859470 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-9a22ba63-eedd-4823-b6a6-6bd7575fa641\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-9a22ba63-eedd-4823-b6a6-6bd7575fa641\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.860863 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-1c86bda2-49e2-452e-802e-fe8e9390c9f1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1c86bda2-49e2-452e-802e-fe8e9390c9f1\") pod \"logging-loki-ingester-0\" (UID: \"fb578f45-397a-422e-af3c-04841227ef67\") " pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.861241 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-605a8af9-532a-41a7-b664-5166b3aef87e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-605a8af9-532a-41a7-b664-5166b3aef87e\") pod \"logging-loki-compactor-0\" (UID: \"4177ccdd-88ab-419a-a189-2f1af9b587e3\") " pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.922745 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.922811 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.922836 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.922871 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.922910 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v745n\" (UniqueName: \"kubernetes.io/projected/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-kube-api-access-v745n\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.922930 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.922969 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-86606301-4770-43fb-bc1e-0034fd6c25d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86606301-4770-43fb-bc1e-0034fd6c25d3\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.923992 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-config\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.924273 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.927333 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.927368 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-86606301-4770-43fb-bc1e-0034fd6c25d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86606301-4770-43fb-bc1e-0034fd6c25d3\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07bc0954cb3d4a93e42a44eb1a624f06d01c0b17dcdb88351ff6471f6d2c952d/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.928353 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.928377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.930888 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.940699 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v745n\" (UniqueName: \"kubernetes.io/projected/4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6-kube-api-access-v745n\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.941377 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.949644 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:06 crc kubenswrapper[4904]: I1121 13:46:06.961803 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-86606301-4770-43fb-bc1e-0034fd6c25d3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86606301-4770-43fb-bc1e-0034fd6c25d3\") pod \"logging-loki-index-gateway-0\" (UID: \"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:07 crc kubenswrapper[4904]: I1121 13:46:07.059295 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:07 crc kubenswrapper[4904]: I1121 13:46:07.376496 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 21 13:46:07 crc kubenswrapper[4904]: W1121 13:46:07.382617 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb578f45_397a_422e_af3c_04841227ef67.slice/crio-cc2026d2c8f3a6993221a81749bae37d0730a676261c73d9aaefcccdac5bc269 WatchSource:0}: Error finding container cc2026d2c8f3a6993221a81749bae37d0730a676261c73d9aaefcccdac5bc269: Status 404 returned error can't find the container with id cc2026d2c8f3a6993221a81749bae37d0730a676261c73d9aaefcccdac5bc269 Nov 21 13:46:07 crc kubenswrapper[4904]: I1121 13:46:07.473293 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 21 13:46:07 crc kubenswrapper[4904]: W1121 13:46:07.481618 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4177ccdd_88ab_419a_a189_2f1af9b587e3.slice/crio-5cb1667ac98b2c50c124a7a0da7bb2582b5a0ebe87cb9cfe33a24b63ab973847 WatchSource:0}: Error finding container 5cb1667ac98b2c50c124a7a0da7bb2582b5a0ebe87cb9cfe33a24b63ab973847: Status 404 returned error can't find the container with id 5cb1667ac98b2c50c124a7a0da7bb2582b5a0ebe87cb9cfe33a24b63ab973847 Nov 21 13:46:07 crc kubenswrapper[4904]: I1121 13:46:07.553392 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 21 13:46:07 crc kubenswrapper[4904]: I1121 13:46:07.740122 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6","Type":"ContainerStarted","Data":"b807b298e465ea556ea1bf9f27d3a79642244a98d5f3f6ed82fdbb7fa398277d"} Nov 21 13:46:07 crc kubenswrapper[4904]: I1121 13:46:07.741609 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"fb578f45-397a-422e-af3c-04841227ef67","Type":"ContainerStarted","Data":"cc2026d2c8f3a6993221a81749bae37d0730a676261c73d9aaefcccdac5bc269"} Nov 21 13:46:07 crc kubenswrapper[4904]: I1121 13:46:07.743402 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"4177ccdd-88ab-419a-a189-2f1af9b587e3","Type":"ContainerStarted","Data":"5cb1667ac98b2c50c124a7a0da7bb2582b5a0ebe87cb9cfe33a24b63ab973847"} Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.797033 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" event={"ID":"a74cf7e4-1b6c-4547-bb31-eea3e99e49a1","Type":"ContainerStarted","Data":"a126f59149472c5216cff735419b3cef8a4a5749d5740b3938636049cdb8767f"} Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.798000 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.799182 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"fb578f45-397a-422e-af3c-04841227ef67","Type":"ContainerStarted","Data":"2523626d4a41cbe737eb6a98901a1800e71ed6ab70ba6a173578efd53b415b68"} Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.799280 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.801035 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" event={"ID":"93fb0ef8-1685-4951-b2db-a0921787ff1a","Type":"ContainerStarted","Data":"ac14f3eda57cbcf5906dfb2b74ae413c9afa9eeee3ff92279cb9581536ec66ac"} Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.801098 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.802754 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" event={"ID":"0c5c39d2-3a23-41c1-bec3-317ca97022f4","Type":"ContainerStarted","Data":"d1bcbf7eda7afddd5d8611b01eea0d79844902d72fc86dd83fdfb326b3b4dfe7"} Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.804160 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" event={"ID":"e4e29add-5250-42af-af50-40efeec82a2d","Type":"ContainerStarted","Data":"21fd4640050fb75e61045ceeeaf62bd247a6cc335b54c31ecdfda504394fee92"} Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.804265 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.805691 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"4177ccdd-88ab-419a-a189-2f1af9b587e3","Type":"ContainerStarted","Data":"2598096a31937b88e68562179811a709f3968c0ba705c91a29c2b5883e9d2289"} Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.805757 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.807124 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6","Type":"ContainerStarted","Data":"ce82ab7711eb4b17e8dec8eb82d1eeb1a30d159d9cb18d1eb8f607d1f4124347"} Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.807373 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.822738 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" 
podStartSLOduration=2.070959214 podStartE2EDuration="6.822721228s" podCreationTimestamp="2025-11-21 13:46:05 +0000 UTC" firstStartedPulling="2025-11-21 13:46:06.401415072 +0000 UTC m=+840.522947624" lastFinishedPulling="2025-11-21 13:46:11.153177086 +0000 UTC m=+845.274709638" observedRunningTime="2025-11-21 13:46:11.819114708 +0000 UTC m=+845.940647260" watchObservedRunningTime="2025-11-21 13:46:11.822721228 +0000 UTC m=+845.944253780" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.842145 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" podStartSLOduration=1.70367195 podStartE2EDuration="6.842126697s" podCreationTimestamp="2025-11-21 13:46:05 +0000 UTC" firstStartedPulling="2025-11-21 13:46:06.036377506 +0000 UTC m=+840.157910058" lastFinishedPulling="2025-11-21 13:46:11.174832253 +0000 UTC m=+845.296364805" observedRunningTime="2025-11-21 13:46:11.839126693 +0000 UTC m=+845.960659245" watchObservedRunningTime="2025-11-21 13:46:11.842126697 +0000 UTC m=+845.963659249" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.863748 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.284157542 podStartE2EDuration="6.863721509s" podCreationTimestamp="2025-11-21 13:46:05 +0000 UTC" firstStartedPulling="2025-11-21 13:46:07.57363544 +0000 UTC m=+841.695167992" lastFinishedPulling="2025-11-21 13:46:11.153199407 +0000 UTC m=+845.274731959" observedRunningTime="2025-11-21 13:46:11.861339441 +0000 UTC m=+845.982872013" watchObservedRunningTime="2025-11-21 13:46:11.863721509 +0000 UTC m=+845.985254061" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.889816 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.235070081 podStartE2EDuration="6.889798903s" podCreationTimestamp="2025-11-21 13:46:05 +0000 UTC" firstStartedPulling="2025-11-21 13:46:07.484096679 +0000 UTC m=+841.605629231" lastFinishedPulling="2025-11-21 13:46:11.138825501 +0000 UTC m=+845.260358053" observedRunningTime="2025-11-21 13:46:11.884464341 +0000 UTC m=+846.005996903" watchObservedRunningTime="2025-11-21 13:46:11.889798903 +0000 UTC m=+846.011331455" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.916462 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.011316428 podStartE2EDuration="6.9164393s" podCreationTimestamp="2025-11-21 13:46:05 +0000 UTC" firstStartedPulling="2025-11-21 13:46:07.384050728 +0000 UTC m=+841.505583280" lastFinishedPulling="2025-11-21 13:46:11.2891736 +0000 UTC m=+845.410706152" observedRunningTime="2025-11-21 13:46:11.915841175 +0000 UTC m=+846.037373727" watchObservedRunningTime="2025-11-21 13:46:11.9164393 +0000 UTC m=+846.037971852" Nov 21 13:46:11 crc kubenswrapper[4904]: I1121 13:46:11.935153 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" podStartSLOduration=1.984748279 podStartE2EDuration="6.935113741s" podCreationTimestamp="2025-11-21 13:46:05 +0000 UTC" firstStartedPulling="2025-11-21 13:46:06.142880538 +0000 UTC m=+840.264413090" lastFinishedPulling="2025-11-21 13:46:11.093246 +0000 UTC m=+845.214778552" observedRunningTime="2025-11-21 13:46:11.934084225 +0000 UTC m=+846.055616777" watchObservedRunningTime="2025-11-21 
13:46:11.935113741 +0000 UTC m=+846.056646323" Nov 21 13:46:12 crc kubenswrapper[4904]: I1121 13:46:12.817239 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" event={"ID":"fafeb02f-1864-4678-9846-d799fa1bc3c4","Type":"ContainerStarted","Data":"4075e7da7dfb672cf45b440b39c9345c0677947c960e2f9d2c0ef13fe90e5370"} Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.834840 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" event={"ID":"fafeb02f-1864-4678-9846-d799fa1bc3c4","Type":"ContainerStarted","Data":"a27e6fa98ca3d310cf80406e31ada5c5506d26603bd10a8b52e4eec155ab6f1e"} Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.835568 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.835756 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.840101 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" event={"ID":"0c5c39d2-3a23-41c1-bec3-317ca97022f4","Type":"ContainerStarted","Data":"bffde598afc7d9b58a876a10493430ce7542967434b3a83466657e361ce9484e"} Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.840336 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.847975 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.850981 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.854282 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.861407 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-psz2n" podStartSLOduration=2.482552741 podStartE2EDuration="9.861387322s" podCreationTimestamp="2025-11-21 13:46:05 +0000 UTC" firstStartedPulling="2025-11-21 13:46:06.362507946 +0000 UTC m=+840.484040498" lastFinishedPulling="2025-11-21 13:46:13.741342527 +0000 UTC m=+847.862875079" observedRunningTime="2025-11-21 13:46:14.858206894 +0000 UTC m=+848.979739446" watchObservedRunningTime="2025-11-21 13:46:14.861387322 +0000 UTC m=+848.982919874" Nov 21 13:46:14 crc kubenswrapper[4904]: I1121 13:46:14.941378 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" podStartSLOduration=2.89575733 podStartE2EDuration="9.941361035s" podCreationTimestamp="2025-11-21 13:46:05 +0000 UTC" firstStartedPulling="2025-11-21 13:46:06.691267322 +0000 UTC m=+840.812799874" lastFinishedPulling="2025-11-21 13:46:13.736871017 +0000 UTC m=+847.858403579" observedRunningTime="2025-11-21 13:46:14.940701559 +0000 UTC m=+849.062234131" watchObservedRunningTime="2025-11-21 13:46:14.941361035 +0000 UTC m=+849.062893587" Nov 21 13:46:15 crc kubenswrapper[4904]: I1121 
13:46:15.850885 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:15 crc kubenswrapper[4904]: I1121 13:46:15.864905 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-5bc6fc7c85-zrb77" Nov 21 13:46:26 crc kubenswrapper[4904]: I1121 13:46:26.949159 4904 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 21 13:46:26 crc kubenswrapper[4904]: I1121 13:46:26.949825 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="fb578f45-397a-422e-af3c-04841227ef67" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:46:26 crc kubenswrapper[4904]: I1121 13:46:26.956137 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 21 13:46:27 crc kubenswrapper[4904]: I1121 13:46:27.075846 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 21 13:46:35 crc kubenswrapper[4904]: I1121 13:46:35.767402 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-6rt82" Nov 21 13:46:35 crc kubenswrapper[4904]: I1121 13:46:35.873065 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-zd9gb" Nov 21 13:46:35 crc kubenswrapper[4904]: I1121 13:46:35.959238 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-dlmmw" Nov 21 13:46:36 crc kubenswrapper[4904]: I1121 13:46:36.947276 4904 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 21 13:46:36 crc kubenswrapper[4904]: I1121 13:46:36.947835 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="fb578f45-397a-422e-af3c-04841227ef67" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:46:46 crc kubenswrapper[4904]: I1121 13:46:46.949009 4904 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 21 13:46:46 crc kubenswrapper[4904]: I1121 13:46:46.950090 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="fb578f45-397a-422e-af3c-04841227ef67" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:46:56 crc kubenswrapper[4904]: I1121 13:46:56.947317 4904 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 
15s after being ready Nov 21 13:46:56 crc kubenswrapper[4904]: I1121 13:46:56.948452 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="fb578f45-397a-422e-af3c-04841227ef67" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:46:58 crc kubenswrapper[4904]: I1121 13:46:58.114367 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:46:58 crc kubenswrapper[4904]: I1121 13:46:58.114473 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:47:06 crc kubenswrapper[4904]: I1121 13:47:06.950017 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.296998 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-4drxq"] Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.301852 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.307643 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.312042 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qs2jh" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.312041 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.312452 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.312556 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.320175 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4drxq"] Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.329626 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.372334 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-4drxq"] Nov 21 13:47:26 crc kubenswrapper[4904]: E1121 13:47:26.372804 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-nlfrt metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-nlfrt metrics sa-token tmp trusted-ca]: context canceled" pod="openshift-logging/collector-4drxq" 
podUID="f0657620-ab4b-40e8-8bf6-80a038aa5974" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442089 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f0657620-ab4b-40e8-8bf6-80a038aa5974-tmp\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442131 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f0657620-ab4b-40e8-8bf6-80a038aa5974-datadir\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442374 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlfrt\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-kube-api-access-nlfrt\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442550 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-entrypoint\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442600 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-token\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442615 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-sa-token\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442642 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config-openshift-service-cacrt\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442689 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-trusted-ca\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442807 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-metrics\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc 
kubenswrapper[4904]: I1121 13:47:26.442919 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-syslog-receiver\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.442956 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.448582 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.455667 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544382 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-entrypoint\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544474 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-token\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544513 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-sa-token\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544558 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config-openshift-service-cacrt\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544606 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-trusted-ca\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544640 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-metrics\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544699 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-syslog-receiver\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544735 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544772 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f0657620-ab4b-40e8-8bf6-80a038aa5974-tmp\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544813 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f0657620-ab4b-40e8-8bf6-80a038aa5974-datadir\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.544852 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlfrt\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-kube-api-access-nlfrt\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.546230 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-trusted-ca\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.546362 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-entrypoint\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.546601 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.546718 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f0657620-ab4b-40e8-8bf6-80a038aa5974-datadir\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.546738 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config-openshift-service-cacrt\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.550562 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f0657620-ab4b-40e8-8bf6-80a038aa5974-tmp\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.551087 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-syslog-receiver\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.551503 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-metrics\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.553191 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-token\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.568048 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-sa-token\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.574871 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlfrt\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-kube-api-access-nlfrt\") pod \"collector-4drxq\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " pod="openshift-logging/collector-4drxq" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.645581 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-entrypoint\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.645715 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlfrt\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-kube-api-access-nlfrt\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.645748 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config-openshift-service-cacrt\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.645808 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-sa-token\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 
13:47:26.645837 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-syslog-receiver\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646054 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646521 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-trusted-ca\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646593 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646374 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646634 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f0657620-ab4b-40e8-8bf6-80a038aa5974-tmp\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646685 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f0657620-ab4b-40e8-8bf6-80a038aa5974-datadir\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646724 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-metrics\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646751 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-token\") pod \"f0657620-ab4b-40e8-8bf6-80a038aa5974\" (UID: \"f0657620-ab4b-40e8-8bf6-80a038aa5974\") " Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.646981 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config" (OuterVolumeSpecName: "config") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.647180 4904 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-entrypoint\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.647205 4904 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.647217 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.647844 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.647884 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0657620-ab4b-40e8-8bf6-80a038aa5974-datadir" (OuterVolumeSpecName: "datadir") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.649373 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-kube-api-access-nlfrt" (OuterVolumeSpecName: "kube-api-access-nlfrt") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "kube-api-access-nlfrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.651003 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-token" (OuterVolumeSpecName: "collector-token") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.651084 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0657620-ab4b-40e8-8bf6-80a038aa5974-tmp" (OuterVolumeSpecName: "tmp") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.651386 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-metrics" (OuterVolumeSpecName: "metrics") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.652734 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-sa-token" (OuterVolumeSpecName: "sa-token") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.654997 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "f0657620-ab4b-40e8-8bf6-80a038aa5974" (UID: "f0657620-ab4b-40e8-8bf6-80a038aa5974"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.749289 4904 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f0657620-ab4b-40e8-8bf6-80a038aa5974-datadir\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.749341 4904 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-metrics\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.749355 4904 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.749372 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlfrt\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-kube-api-access-nlfrt\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.749386 4904 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f0657620-ab4b-40e8-8bf6-80a038aa5974-sa-token\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.749400 4904 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f0657620-ab4b-40e8-8bf6-80a038aa5974-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.749414 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0657620-ab4b-40e8-8bf6-80a038aa5974-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:26 crc kubenswrapper[4904]: I1121 13:47:26.749425 4904 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f0657620-ab4b-40e8-8bf6-80a038aa5974-tmp\") on node \"crc\" DevicePath \"\"" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.455167 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4drxq" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.506490 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-4drxq"] Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.515007 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-4drxq"] Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.534627 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-mz46k"] Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.535630 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.539737 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.539950 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-qs2jh" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.540009 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.540337 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.542969 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.552686 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-mz46k"] Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.556417 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.669735 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-collector-token\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.669793 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-metrics\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.669845 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-entrypoint\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.669860 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8zl\" (UniqueName: \"kubernetes.io/projected/a982be9e-6e45-4a41-b736-9a673acaf3c0-kube-api-access-9q8zl\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.669893 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-config-openshift-service-cacrt\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.669925 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-trusted-ca\") pod \"collector-mz46k\" (UID: 
\"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.669947 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a982be9e-6e45-4a41-b736-9a673acaf3c0-tmp\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.670305 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a982be9e-6e45-4a41-b736-9a673acaf3c0-sa-token\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.670532 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-collector-syslog-receiver\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.670617 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-config\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.670701 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a982be9e-6e45-4a41-b736-9a673acaf3c0-datadir\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.772830 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-collector-token\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.772892 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-metrics\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.772942 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8zl\" (UniqueName: \"kubernetes.io/projected/a982be9e-6e45-4a41-b736-9a673acaf3c0-kube-api-access-9q8zl\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.772965 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-entrypoint\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.772991 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-config-openshift-service-cacrt\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.773016 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-trusted-ca\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.773043 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a982be9e-6e45-4a41-b736-9a673acaf3c0-tmp\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.773067 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a982be9e-6e45-4a41-b736-9a673acaf3c0-sa-token\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.773105 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-collector-syslog-receiver\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.773133 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-config\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.773161 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a982be9e-6e45-4a41-b736-9a673acaf3c0-datadir\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.773258 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a982be9e-6e45-4a41-b736-9a673acaf3c0-datadir\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.774386 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-config\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.774815 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-config-openshift-service-cacrt\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " 
pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.775288 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-entrypoint\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.775560 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a982be9e-6e45-4a41-b736-9a673acaf3c0-trusted-ca\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.777633 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-metrics\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.777768 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-collector-token\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.778149 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a982be9e-6e45-4a41-b736-9a673acaf3c0-collector-syslog-receiver\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.780797 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a982be9e-6e45-4a41-b736-9a673acaf3c0-tmp\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.792838 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a982be9e-6e45-4a41-b736-9a673acaf3c0-sa-token\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.802589 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8zl\" (UniqueName: \"kubernetes.io/projected/a982be9e-6e45-4a41-b736-9a673acaf3c0-kube-api-access-9q8zl\") pod \"collector-mz46k\" (UID: \"a982be9e-6e45-4a41-b736-9a673acaf3c0\") " pod="openshift-logging/collector-mz46k" Nov 21 13:47:27 crc kubenswrapper[4904]: I1121 13:47:27.861421 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-mz46k" Nov 21 13:47:28 crc kubenswrapper[4904]: I1121 13:47:28.113279 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:47:28 crc kubenswrapper[4904]: I1121 13:47:28.113330 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:47:28 crc kubenswrapper[4904]: I1121 13:47:28.368383 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-mz46k"] Nov 21 13:47:28 crc kubenswrapper[4904]: I1121 13:47:28.462236 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-mz46k" event={"ID":"a982be9e-6e45-4a41-b736-9a673acaf3c0","Type":"ContainerStarted","Data":"df2071f21addc5e4a17484440ba4513974da088f6889a3b98d273f0ec3778f9f"} Nov 21 13:47:28 crc kubenswrapper[4904]: I1121 13:47:28.524971 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0657620-ab4b-40e8-8bf6-80a038aa5974" path="/var/lib/kubelet/pods/f0657620-ab4b-40e8-8bf6-80a038aa5974/volumes" Nov 21 13:47:34 crc kubenswrapper[4904]: I1121 13:47:34.521908 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-mz46k" event={"ID":"a982be9e-6e45-4a41-b736-9a673acaf3c0","Type":"ContainerStarted","Data":"d97d8ebc5f9c4f30115e90e34fb58a5a52d7971a9a1cd00d992c34d906151f89"} Nov 21 13:47:34 crc kubenswrapper[4904]: I1121 13:47:34.565388 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-mz46k" podStartSLOduration=1.690589676 podStartE2EDuration="7.56535234s" podCreationTimestamp="2025-11-21 13:47:27 +0000 UTC" firstStartedPulling="2025-11-21 13:47:28.38211641 +0000 UTC m=+922.503649002" lastFinishedPulling="2025-11-21 13:47:34.256879114 +0000 UTC m=+928.378411666" observedRunningTime="2025-11-21 13:47:34.553520808 +0000 UTC m=+928.675053400" watchObservedRunningTime="2025-11-21 13:47:34.56535234 +0000 UTC m=+928.686884922" Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.113730 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.114275 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.114331 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.114886 4904 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be6a3c0a99c9505797540ba9588fe9f6a753a8471c941586b86762324c656b9e"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.114949 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://be6a3c0a99c9505797540ba9588fe9f6a753a8471c941586b86762324c656b9e" gracePeriod=600 Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.717580 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="be6a3c0a99c9505797540ba9588fe9f6a753a8471c941586b86762324c656b9e" exitCode=0 Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.717696 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"be6a3c0a99c9505797540ba9588fe9f6a753a8471c941586b86762324c656b9e"} Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.718214 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"d7fd2b100d2e7ad73c083b4c79b52506999f9dce6592051de1411c6354bd5da0"} Nov 21 13:47:58 crc kubenswrapper[4904]: I1121 13:47:58.718257 4904 scope.go:117] "RemoveContainer" containerID="34e69668c2ea09c1fa2e0c0fbb6545e8671e2e8710d41559fb5aa9d7200b9106" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.296483 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p"] Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.298220 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.300844 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.320387 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p"] Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.424501 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.424551 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxr8\" (UniqueName: \"kubernetes.io/projected/f09e9308-5735-4731-968d-e6b202d380d8-kube-api-access-djxr8\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.424687 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.526095 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.526150 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxr8\" (UniqueName: \"kubernetes.io/projected/f09e9308-5735-4731-968d-e6b202d380d8-kube-api-access-djxr8\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.526249 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.526593 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.526747 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.545265 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxr8\" (UniqueName: \"kubernetes.io/projected/f09e9308-5735-4731-968d-e6b202d380d8-kube-api-access-djxr8\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:47:59 crc kubenswrapper[4904]: I1121 13:47:59.615449 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:48:00 crc kubenswrapper[4904]: I1121 13:48:00.086951 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p"] Nov 21 13:48:00 crc kubenswrapper[4904]: I1121 13:48:00.741952 4904 generic.go:334] "Generic (PLEG): container finished" podID="f09e9308-5735-4731-968d-e6b202d380d8" containerID="bbfbf71a3de46fffb4a2a611c0e4779da96cd955be898ac155ce6622027057c5" exitCode=0 Nov 21 13:48:00 crc kubenswrapper[4904]: I1121 13:48:00.742001 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" event={"ID":"f09e9308-5735-4731-968d-e6b202d380d8","Type":"ContainerDied","Data":"bbfbf71a3de46fffb4a2a611c0e4779da96cd955be898ac155ce6622027057c5"} Nov 21 13:48:00 crc kubenswrapper[4904]: I1121 13:48:00.742053 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" event={"ID":"f09e9308-5735-4731-968d-e6b202d380d8","Type":"ContainerStarted","Data":"fa997c577f908450e7fa77ed6e780c8bb8c4740297a3b59611f33afe2c6d31b0"} Nov 21 13:48:02 crc kubenswrapper[4904]: I1121 13:48:02.760030 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" event={"ID":"f09e9308-5735-4731-968d-e6b202d380d8","Type":"ContainerStarted","Data":"a5d496ab1dec85fc72f7403bc85c5b7c201ea32569dc765ecb83b746ab3d714d"} Nov 21 13:48:03 crc kubenswrapper[4904]: I1121 13:48:03.771350 4904 generic.go:334] "Generic (PLEG): container finished" podID="f09e9308-5735-4731-968d-e6b202d380d8" containerID="a5d496ab1dec85fc72f7403bc85c5b7c201ea32569dc765ecb83b746ab3d714d" exitCode=0 Nov 21 13:48:03 crc kubenswrapper[4904]: I1121 13:48:03.771439 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" 
event={"ID":"f09e9308-5735-4731-968d-e6b202d380d8","Type":"ContainerDied","Data":"a5d496ab1dec85fc72f7403bc85c5b7c201ea32569dc765ecb83b746ab3d714d"} Nov 21 13:48:04 crc kubenswrapper[4904]: I1121 13:48:04.780058 4904 generic.go:334] "Generic (PLEG): container finished" podID="f09e9308-5735-4731-968d-e6b202d380d8" containerID="aac53e8a69b875965b978401759075a8af95a298bbccb33ca95b12bf3e12f054" exitCode=0 Nov 21 13:48:04 crc kubenswrapper[4904]: I1121 13:48:04.780146 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" event={"ID":"f09e9308-5735-4731-968d-e6b202d380d8","Type":"ContainerDied","Data":"aac53e8a69b875965b978401759075a8af95a298bbccb33ca95b12bf3e12f054"} Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.098981 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.282337 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxr8\" (UniqueName: \"kubernetes.io/projected/f09e9308-5735-4731-968d-e6b202d380d8-kube-api-access-djxr8\") pod \"f09e9308-5735-4731-968d-e6b202d380d8\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.282731 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-util\") pod \"f09e9308-5735-4731-968d-e6b202d380d8\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.282794 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-bundle\") pod \"f09e9308-5735-4731-968d-e6b202d380d8\" (UID: \"f09e9308-5735-4731-968d-e6b202d380d8\") " Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.283547 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-bundle" (OuterVolumeSpecName: "bundle") pod "f09e9308-5735-4731-968d-e6b202d380d8" (UID: "f09e9308-5735-4731-968d-e6b202d380d8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.293332 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-util" (OuterVolumeSpecName: "util") pod "f09e9308-5735-4731-968d-e6b202d380d8" (UID: "f09e9308-5735-4731-968d-e6b202d380d8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.294900 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09e9308-5735-4731-968d-e6b202d380d8-kube-api-access-djxr8" (OuterVolumeSpecName: "kube-api-access-djxr8") pod "f09e9308-5735-4731-968d-e6b202d380d8" (UID: "f09e9308-5735-4731-968d-e6b202d380d8"). InnerVolumeSpecName "kube-api-access-djxr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.384924 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.384978 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f09e9308-5735-4731-968d-e6b202d380d8-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.384988 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxr8\" (UniqueName: \"kubernetes.io/projected/f09e9308-5735-4731-968d-e6b202d380d8-kube-api-access-djxr8\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.800751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" event={"ID":"f09e9308-5735-4731-968d-e6b202d380d8","Type":"ContainerDied","Data":"fa997c577f908450e7fa77ed6e780c8bb8c4740297a3b59611f33afe2c6d31b0"} Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.800848 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa997c577f908450e7fa77ed6e780c8bb8c4740297a3b59611f33afe2c6d31b0" Nov 21 13:48:06 crc kubenswrapper[4904]: I1121 13:48:06.800877 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.187836 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ql6sj"] Nov 21 13:48:11 crc kubenswrapper[4904]: E1121 13:48:11.189106 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09e9308-5735-4731-968d-e6b202d380d8" containerName="extract" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.189127 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09e9308-5735-4731-968d-e6b202d380d8" containerName="extract" Nov 21 13:48:11 crc kubenswrapper[4904]: E1121 13:48:11.189149 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09e9308-5735-4731-968d-e6b202d380d8" containerName="pull" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.189157 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09e9308-5735-4731-968d-e6b202d380d8" containerName="pull" Nov 21 13:48:11 crc kubenswrapper[4904]: E1121 13:48:11.189174 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09e9308-5735-4731-968d-e6b202d380d8" containerName="util" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.189181 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09e9308-5735-4731-968d-e6b202d380d8" containerName="util" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.189351 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09e9308-5735-4731-968d-e6b202d380d8" containerName="extract" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.189985 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-ql6sj" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.193779 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-hf5sr" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.193819 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.194201 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.201824 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ql6sj"] Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.271958 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28l8\" (UniqueName: \"kubernetes.io/projected/3b9ad4cd-55bf-41e4-8740-b88b8be917c5-kube-api-access-s28l8\") pod \"nmstate-operator-557fdffb88-ql6sj\" (UID: \"3b9ad4cd-55bf-41e4-8740-b88b8be917c5\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ql6sj" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.372771 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28l8\" (UniqueName: \"kubernetes.io/projected/3b9ad4cd-55bf-41e4-8740-b88b8be917c5-kube-api-access-s28l8\") pod \"nmstate-operator-557fdffb88-ql6sj\" (UID: \"3b9ad4cd-55bf-41e4-8740-b88b8be917c5\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ql6sj" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.395095 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s28l8\" (UniqueName: \"kubernetes.io/projected/3b9ad4cd-55bf-41e4-8740-b88b8be917c5-kube-api-access-s28l8\") pod \"nmstate-operator-557fdffb88-ql6sj\" (UID: \"3b9ad4cd-55bf-41e4-8740-b88b8be917c5\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ql6sj" Nov 21 13:48:11 crc kubenswrapper[4904]: I1121 13:48:11.508443 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-ql6sj" Nov 21 13:48:12 crc kubenswrapper[4904]: I1121 13:48:12.936697 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ql6sj"] Nov 21 13:48:12 crc kubenswrapper[4904]: W1121 13:48:12.942080 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b9ad4cd_55bf_41e4_8740_b88b8be917c5.slice/crio-6f3ffb95da2ddf868262f88330625433da60fc961b40f2c002d492635ef780be WatchSource:0}: Error finding container 6f3ffb95da2ddf868262f88330625433da60fc961b40f2c002d492635ef780be: Status 404 returned error can't find the container with id 6f3ffb95da2ddf868262f88330625433da60fc961b40f2c002d492635ef780be Nov 21 13:48:13 crc kubenswrapper[4904]: I1121 13:48:13.910531 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-ql6sj" event={"ID":"3b9ad4cd-55bf-41e4-8740-b88b8be917c5","Type":"ContainerStarted","Data":"6f3ffb95da2ddf868262f88330625433da60fc961b40f2c002d492635ef780be"} Nov 21 13:48:15 crc kubenswrapper[4904]: I1121 13:48:15.931396 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-ql6sj" event={"ID":"3b9ad4cd-55bf-41e4-8740-b88b8be917c5","Type":"ContainerStarted","Data":"0454200f4a2857cdd78498c16e410ae0f0e17c8d82acd0ade3230a9819326ec2"} Nov 21 13:48:15 crc kubenswrapper[4904]: I1121 13:48:15.958102 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-ql6sj" podStartSLOduration=2.985835166 podStartE2EDuration="4.958074589s" podCreationTimestamp="2025-11-21 13:48:11 +0000 UTC" firstStartedPulling="2025-11-21 13:48:12.945110913 +0000 UTC m=+967.066643465" lastFinishedPulling="2025-11-21 13:48:14.917350336 +0000 UTC m=+969.038882888" observedRunningTime="2025-11-21 13:48:15.952411459 +0000 UTC m=+970.073944011" watchObservedRunningTime="2025-11-21 13:48:15.958074589 +0000 UTC m=+970.079607171" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.421981 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.424777 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.429761 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.431100 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.436855 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.443562 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-64t2p" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.451649 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.457775 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.485870 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-m899x"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.487078 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.581018 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.582331 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.584790 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qw2vl" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.585155 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.586311 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.613964 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-nmstate-lock\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.614470 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22fzv\" (UniqueName: \"kubernetes.io/projected/ae90deaf-c915-46ad-b731-7b79e746fffa-kube-api-access-22fzv\") pod \"nmstate-metrics-5dcf9c57c5-49ws5\" (UID: \"ae90deaf-c915-46ad-b731-7b79e746fffa\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.614587 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-ovs-socket\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.614748 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rp4\" (UniqueName: 
\"kubernetes.io/projected/d39cd226-e819-4c84-9b6b-c28b8ae7d638-kube-api-access-w4rp4\") pod \"nmstate-webhook-6b89b748d8-5j92x\" (UID: \"d39cd226-e819-4c84-9b6b-c28b8ae7d638\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.614874 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/46f05915-4deb-45e5-8f4e-109e6c633d4e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.614983 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f05915-4deb-45e5-8f4e-109e6c633d4e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.615090 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86hq\" (UniqueName: \"kubernetes.io/projected/a317caad-8aa4-4620-9812-975486e6c3f6-kube-api-access-d86hq\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.615198 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml57j\" (UniqueName: \"kubernetes.io/projected/46f05915-4deb-45e5-8f4e-109e6c633d4e-kube-api-access-ml57j\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.615311 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d39cd226-e819-4c84-9b6b-c28b8ae7d638-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5j92x\" (UID: \"d39cd226-e819-4c84-9b6b-c28b8ae7d638\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.615449 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-dbus-socket\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.627405 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716606 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-nmstate-lock\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716671 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22fzv\" (UniqueName: 
\"kubernetes.io/projected/ae90deaf-c915-46ad-b731-7b79e746fffa-kube-api-access-22fzv\") pod \"nmstate-metrics-5dcf9c57c5-49ws5\" (UID: \"ae90deaf-c915-46ad-b731-7b79e746fffa\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716716 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-ovs-socket\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716763 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4rp4\" (UniqueName: \"kubernetes.io/projected/d39cd226-e819-4c84-9b6b-c28b8ae7d638-kube-api-access-w4rp4\") pod \"nmstate-webhook-6b89b748d8-5j92x\" (UID: \"d39cd226-e819-4c84-9b6b-c28b8ae7d638\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716769 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-nmstate-lock\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716787 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/46f05915-4deb-45e5-8f4e-109e6c633d4e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716881 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f05915-4deb-45e5-8f4e-109e6c633d4e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716923 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86hq\" (UniqueName: \"kubernetes.io/projected/a317caad-8aa4-4620-9812-975486e6c3f6-kube-api-access-d86hq\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716944 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml57j\" (UniqueName: \"kubernetes.io/projected/46f05915-4deb-45e5-8f4e-109e6c633d4e-kube-api-access-ml57j\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716952 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-ovs-socket\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.716976 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d39cd226-e819-4c84-9b6b-c28b8ae7d638-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5j92x\" (UID: \"d39cd226-e819-4c84-9b6b-c28b8ae7d638\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.717165 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-dbus-socket\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.717780 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a317caad-8aa4-4620-9812-975486e6c3f6-dbus-socket\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.720908 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/46f05915-4deb-45e5-8f4e-109e6c633d4e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.732928 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d39cd226-e819-4c84-9b6b-c28b8ae7d638-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5j92x\" (UID: \"d39cd226-e819-4c84-9b6b-c28b8ae7d638\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.733784 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/46f05915-4deb-45e5-8f4e-109e6c633d4e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.736805 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22fzv\" (UniqueName: \"kubernetes.io/projected/ae90deaf-c915-46ad-b731-7b79e746fffa-kube-api-access-22fzv\") pod \"nmstate-metrics-5dcf9c57c5-49ws5\" (UID: \"ae90deaf-c915-46ad-b731-7b79e746fffa\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.737499 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86hq\" (UniqueName: \"kubernetes.io/projected/a317caad-8aa4-4620-9812-975486e6c3f6-kube-api-access-d86hq\") pod \"nmstate-handler-m899x\" (UID: \"a317caad-8aa4-4620-9812-975486e6c3f6\") " pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.737834 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml57j\" (UniqueName: \"kubernetes.io/projected/46f05915-4deb-45e5-8f4e-109e6c633d4e-kube-api-access-ml57j\") pod \"nmstate-console-plugin-5874bd7bc5-vkkht\" (UID: \"46f05915-4deb-45e5-8f4e-109e6c633d4e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.739601 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4rp4\" (UniqueName: \"kubernetes.io/projected/d39cd226-e819-4c84-9b6b-c28b8ae7d638-kube-api-access-w4rp4\") pod \"nmstate-webhook-6b89b748d8-5j92x\" (UID: \"d39cd226-e819-4c84-9b6b-c28b8ae7d638\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.752436 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.769873 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.804338 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f55c9947c-tc6x6"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.805391 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.806681 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.819417 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5rm\" (UniqueName: \"kubernetes.io/projected/ace620dd-9305-41c2-a78f-f52c221ae850-kube-api-access-sp5rm\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.819542 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-oauth-config\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.819575 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-console-config\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.819597 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-oauth-serving-cert\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.819614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-trusted-ca-bundle\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.819635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-service-ca\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.819654 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-serving-cert\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.824611 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f55c9947c-tc6x6"] Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.871034 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.906345 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.920974 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-oauth-config\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.921417 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-console-config\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.921448 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-oauth-serving-cert\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.921470 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-trusted-ca-bundle\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.921493 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-service-ca\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.921515 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-serving-cert\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc 
kubenswrapper[4904]: I1121 13:48:20.921544 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp5rm\" (UniqueName: \"kubernetes.io/projected/ace620dd-9305-41c2-a78f-f52c221ae850-kube-api-access-sp5rm\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.923162 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-service-ca\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.923463 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-console-config\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.923509 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-oauth-serving-cert\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.931247 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-trusted-ca-bundle\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.933515 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-oauth-config\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.952852 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-serving-cert\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.962418 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp5rm\" (UniqueName: \"kubernetes.io/projected/ace620dd-9305-41c2-a78f-f52c221ae850-kube-api-access-sp5rm\") pod \"console-7f55c9947c-tc6x6\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:20 crc kubenswrapper[4904]: I1121 13:48:20.999286 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-m899x" event={"ID":"a317caad-8aa4-4620-9812-975486e6c3f6","Type":"ContainerStarted","Data":"7e695fa5efb12bcecf3bcaa8634c811cac34d6990cb558e03a3aa023be8e6b8a"} Nov 21 13:48:21 crc kubenswrapper[4904]: I1121 13:48:21.173976 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x"] Nov 21 13:48:21 crc kubenswrapper[4904]: I1121 13:48:21.185817 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:21 crc kubenswrapper[4904]: I1121 13:48:21.317196 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht"] Nov 21 13:48:21 crc kubenswrapper[4904]: W1121 13:48:21.328746 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f05915_4deb_45e5_8f4e_109e6c633d4e.slice/crio-7694d333a252e250b2dfd22743083012b3a8019b7f1a4bd38441deee1d0ce933 WatchSource:0}: Error finding container 7694d333a252e250b2dfd22743083012b3a8019b7f1a4bd38441deee1d0ce933: Status 404 returned error can't find the container with id 7694d333a252e250b2dfd22743083012b3a8019b7f1a4bd38441deee1d0ce933 Nov 21 13:48:21 crc kubenswrapper[4904]: I1121 13:48:21.413390 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f55c9947c-tc6x6"] Nov 21 13:48:21 crc kubenswrapper[4904]: W1121 13:48:21.419128 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace620dd_9305_41c2_a78f_f52c221ae850.slice/crio-657e1b3e7e5fcfcbac1e753d23fbf2b8ecbd0853f41a7b4afcdd59db2be83c3b WatchSource:0}: Error finding container 657e1b3e7e5fcfcbac1e753d23fbf2b8ecbd0853f41a7b4afcdd59db2be83c3b: Status 404 returned error can't find the container with id 657e1b3e7e5fcfcbac1e753d23fbf2b8ecbd0853f41a7b4afcdd59db2be83c3b Nov 21 13:48:21 crc kubenswrapper[4904]: I1121 13:48:21.477358 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5"] Nov 21 13:48:21 crc kubenswrapper[4904]: W1121 13:48:21.489274 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae90deaf_c915_46ad_b731_7b79e746fffa.slice/crio-269583246ebb1b4c9762b502d1eee631e2d80ee53296bbeb0d59f197a57a052f WatchSource:0}: Error finding container 269583246ebb1b4c9762b502d1eee631e2d80ee53296bbeb0d59f197a57a052f: Status 404 returned error can't find the container with id 269583246ebb1b4c9762b502d1eee631e2d80ee53296bbeb0d59f197a57a052f Nov 21 13:48:22 crc kubenswrapper[4904]: I1121 13:48:22.006505 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" event={"ID":"46f05915-4deb-45e5-8f4e-109e6c633d4e","Type":"ContainerStarted","Data":"7694d333a252e250b2dfd22743083012b3a8019b7f1a4bd38441deee1d0ce933"} Nov 21 13:48:22 crc kubenswrapper[4904]: I1121 13:48:22.007367 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" event={"ID":"d39cd226-e819-4c84-9b6b-c28b8ae7d638","Type":"ContainerStarted","Data":"77c4c87c9a76658e9d1c345030e79b34d025a195117e455a1b366a0769537643"} Nov 21 13:48:22 crc kubenswrapper[4904]: I1121 13:48:22.008327 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" event={"ID":"ae90deaf-c915-46ad-b731-7b79e746fffa","Type":"ContainerStarted","Data":"269583246ebb1b4c9762b502d1eee631e2d80ee53296bbeb0d59f197a57a052f"} Nov 21 13:48:22 crc kubenswrapper[4904]: I1121 13:48:22.009755 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f55c9947c-tc6x6" 
event={"ID":"ace620dd-9305-41c2-a78f-f52c221ae850","Type":"ContainerStarted","Data":"8174f83b6168556d4013276f711da3a023497d034035af40f015dfebc3ca83a5"} Nov 21 13:48:22 crc kubenswrapper[4904]: I1121 13:48:22.009782 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f55c9947c-tc6x6" event={"ID":"ace620dd-9305-41c2-a78f-f52c221ae850","Type":"ContainerStarted","Data":"657e1b3e7e5fcfcbac1e753d23fbf2b8ecbd0853f41a7b4afcdd59db2be83c3b"} Nov 21 13:48:22 crc kubenswrapper[4904]: I1121 13:48:22.026737 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f55c9947c-tc6x6" podStartSLOduration=2.026710034 podStartE2EDuration="2.026710034s" podCreationTimestamp="2025-11-21 13:48:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:48:22.026429207 +0000 UTC m=+976.147961759" watchObservedRunningTime="2025-11-21 13:48:22.026710034 +0000 UTC m=+976.148242596" Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.050408 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" event={"ID":"46f05915-4deb-45e5-8f4e-109e6c633d4e","Type":"ContainerStarted","Data":"9a44e6ad0f9acd4b393640a4d5d9b707125f104ebb57ee987cf19eb719fc2e0c"} Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.053585 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" event={"ID":"d39cd226-e819-4c84-9b6b-c28b8ae7d638","Type":"ContainerStarted","Data":"0b670a32497a57f90a96a588f9827ee82fcf32b01d31107f9067b65a4ecffa86"} Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.053695 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.056942 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" event={"ID":"ae90deaf-c915-46ad-b731-7b79e746fffa","Type":"ContainerStarted","Data":"b9b3d63fc3ddd9a8a7e33b9f803072bc9d0ecca4d7ad64286177918d69a8a58a"} Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.058122 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-m899x" event={"ID":"a317caad-8aa4-4620-9812-975486e6c3f6","Type":"ContainerStarted","Data":"e8dd86dad31a0d7e42083fe0a132041228d376562cdfae489eab4d76efc6b0f6"} Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.058569 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.073781 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-vkkht" podStartSLOduration=2.138772627 podStartE2EDuration="6.073756899s" podCreationTimestamp="2025-11-21 13:48:20 +0000 UTC" firstStartedPulling="2025-11-21 13:48:21.334861724 +0000 UTC m=+975.456394276" lastFinishedPulling="2025-11-21 13:48:25.269845996 +0000 UTC m=+979.391378548" observedRunningTime="2025-11-21 13:48:26.071428442 +0000 UTC m=+980.192961014" watchObservedRunningTime="2025-11-21 13:48:26.073756899 +0000 UTC m=+980.195289451" Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.097167 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-m899x" 
podStartSLOduration=1.701080355 podStartE2EDuration="6.097149076s" podCreationTimestamp="2025-11-21 13:48:20 +0000 UTC" firstStartedPulling="2025-11-21 13:48:20.870675438 +0000 UTC m=+974.992207990" lastFinishedPulling="2025-11-21 13:48:25.266744149 +0000 UTC m=+979.388276711" observedRunningTime="2025-11-21 13:48:26.092907492 +0000 UTC m=+980.214440044" watchObservedRunningTime="2025-11-21 13:48:26.097149076 +0000 UTC m=+980.218681628" Nov 21 13:48:26 crc kubenswrapper[4904]: I1121 13:48:26.117735 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" podStartSLOduration=2.047394364 podStartE2EDuration="6.117716463s" podCreationTimestamp="2025-11-21 13:48:20 +0000 UTC" firstStartedPulling="2025-11-21 13:48:21.197463586 +0000 UTC m=+975.318996138" lastFinishedPulling="2025-11-21 13:48:25.267785685 +0000 UTC m=+979.389318237" observedRunningTime="2025-11-21 13:48:26.117076568 +0000 UTC m=+980.238609120" watchObservedRunningTime="2025-11-21 13:48:26.117716463 +0000 UTC m=+980.239249015" Nov 21 13:48:28 crc kubenswrapper[4904]: I1121 13:48:28.073507 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" event={"ID":"ae90deaf-c915-46ad-b731-7b79e746fffa","Type":"ContainerStarted","Data":"a59924023944ecd8e94d4c8864fd0576b16ef091ba1fa51cb8a0d3f665ed06d7"} Nov 21 13:48:28 crc kubenswrapper[4904]: I1121 13:48:28.091262 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-49ws5" podStartSLOduration=1.7394828119999999 podStartE2EDuration="8.091240398s" podCreationTimestamp="2025-11-21 13:48:20 +0000 UTC" firstStartedPulling="2025-11-21 13:48:21.492212534 +0000 UTC m=+975.613745096" lastFinishedPulling="2025-11-21 13:48:27.84397013 +0000 UTC m=+981.965502682" observedRunningTime="2025-11-21 13:48:28.090872659 +0000 UTC m=+982.212405221" watchObservedRunningTime="2025-11-21 13:48:28.091240398 +0000 UTC m=+982.212772950" Nov 21 13:48:30 crc kubenswrapper[4904]: I1121 13:48:30.843253 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-m899x" Nov 21 13:48:31 crc kubenswrapper[4904]: I1121 13:48:31.186714 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:31 crc kubenswrapper[4904]: I1121 13:48:31.187076 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:31 crc kubenswrapper[4904]: I1121 13:48:31.193572 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:32 crc kubenswrapper[4904]: I1121 13:48:32.105699 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:48:32 crc kubenswrapper[4904]: I1121 13:48:32.164848 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g69x5"] Nov 21 13:48:40 crc kubenswrapper[4904]: I1121 13:48:40.778740 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5j92x" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.220097 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-g69x5" 
podUID="dff8d615-db53-4198-adbd-b6f5fc1cd2be" containerName="console" containerID="cri-o://0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418" gracePeriod=15 Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.639805 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g69x5_dff8d615-db53-4198-adbd-b6f5fc1cd2be/console/0.log" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.640193 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.775234 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttdfj\" (UniqueName: \"kubernetes.io/projected/dff8d615-db53-4198-adbd-b6f5fc1cd2be-kube-api-access-ttdfj\") pod \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.775761 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-config\") pod \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.775798 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-trusted-ca-bundle\") pod \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.775859 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-oauth-serving-cert\") pod \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.775910 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-service-ca\") pod \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.775935 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-serving-cert\") pod \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.775990 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-oauth-config\") pod \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\" (UID: \"dff8d615-db53-4198-adbd-b6f5fc1cd2be\") " Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.776715 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "dff8d615-db53-4198-adbd-b6f5fc1cd2be" (UID: "dff8d615-db53-4198-adbd-b6f5fc1cd2be"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.776639 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-config" (OuterVolumeSpecName: "console-config") pod "dff8d615-db53-4198-adbd-b6f5fc1cd2be" (UID: "dff8d615-db53-4198-adbd-b6f5fc1cd2be"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.776866 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-service-ca" (OuterVolumeSpecName: "service-ca") pod "dff8d615-db53-4198-adbd-b6f5fc1cd2be" (UID: "dff8d615-db53-4198-adbd-b6f5fc1cd2be"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.776971 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dff8d615-db53-4198-adbd-b6f5fc1cd2be" (UID: "dff8d615-db53-4198-adbd-b6f5fc1cd2be"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.797482 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5"] Nov 21 13:48:57 crc kubenswrapper[4904]: E1121 13:48:57.797943 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff8d615-db53-4198-adbd-b6f5fc1cd2be" containerName="console" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.797966 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff8d615-db53-4198-adbd-b6f5fc1cd2be" containerName="console" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.798132 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff8d615-db53-4198-adbd-b6f5fc1cd2be" containerName="console" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.799468 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.802091 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.804851 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "dff8d615-db53-4198-adbd-b6f5fc1cd2be" (UID: "dff8d615-db53-4198-adbd-b6f5fc1cd2be"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.805377 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "dff8d615-db53-4198-adbd-b6f5fc1cd2be" (UID: "dff8d615-db53-4198-adbd-b6f5fc1cd2be"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.809170 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff8d615-db53-4198-adbd-b6f5fc1cd2be-kube-api-access-ttdfj" (OuterVolumeSpecName: "kube-api-access-ttdfj") pod "dff8d615-db53-4198-adbd-b6f5fc1cd2be" (UID: "dff8d615-db53-4198-adbd-b6f5fc1cd2be"). InnerVolumeSpecName "kube-api-access-ttdfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.865580 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5"] Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877287 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5xm\" (UniqueName: \"kubernetes.io/projected/659c44b1-d13a-499e-96f2-3238040bfb51-kube-api-access-nc5xm\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877376 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877474 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877542 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttdfj\" (UniqueName: \"kubernetes.io/projected/dff8d615-db53-4198-adbd-b6f5fc1cd2be-kube-api-access-ttdfj\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877556 4904 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877566 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877576 4904 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877585 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dff8d615-db53-4198-adbd-b6f5fc1cd2be-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:57 crc 
kubenswrapper[4904]: I1121 13:48:57.877595 4904 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.877606 4904 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dff8d615-db53-4198-adbd-b6f5fc1cd2be-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.979022 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5xm\" (UniqueName: \"kubernetes.io/projected/659c44b1-d13a-499e-96f2-3238040bfb51-kube-api-access-nc5xm\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.979113 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.979204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.979825 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:57 crc kubenswrapper[4904]: I1121 13:48:57.979831 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.001038 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5xm\" (UniqueName: \"kubernetes.io/projected/659c44b1-d13a-499e-96f2-3238040bfb51-kube-api-access-nc5xm\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.180601 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.317229 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-g69x5_dff8d615-db53-4198-adbd-b6f5fc1cd2be/console/0.log" Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.317305 4904 generic.go:334] "Generic (PLEG): container finished" podID="dff8d615-db53-4198-adbd-b6f5fc1cd2be" containerID="0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418" exitCode=2 Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.317350 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g69x5" event={"ID":"dff8d615-db53-4198-adbd-b6f5fc1cd2be","Type":"ContainerDied","Data":"0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418"} Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.317382 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-g69x5" event={"ID":"dff8d615-db53-4198-adbd-b6f5fc1cd2be","Type":"ContainerDied","Data":"dda35c1d9f542f037a3111dbbf291e6ea910f45902316083c1a3d985e4b160ff"} Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.317405 4904 scope.go:117] "RemoveContainer" containerID="0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418" Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.317409 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-g69x5" Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.343189 4904 scope.go:117] "RemoveContainer" containerID="0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418" Nov 21 13:48:58 crc kubenswrapper[4904]: E1121 13:48:58.344001 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418\": container with ID starting with 0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418 not found: ID does not exist" containerID="0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418" Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.344087 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418"} err="failed to get container status \"0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418\": rpc error: code = NotFound desc = could not find container \"0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418\": container with ID starting with 0d06642954c00755e4635135376d25e689244566d06f7a2e83d20df340860418 not found: ID does not exist" Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.394231 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-g69x5"] Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.407166 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-g69x5"] Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.441728 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5"] Nov 21 13:48:58 crc kubenswrapper[4904]: I1121 13:48:58.524780 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff8d615-db53-4198-adbd-b6f5fc1cd2be" 
path="/var/lib/kubelet/pods/dff8d615-db53-4198-adbd-b6f5fc1cd2be/volumes" Nov 21 13:48:59 crc kubenswrapper[4904]: I1121 13:48:59.326232 4904 generic.go:334] "Generic (PLEG): container finished" podID="659c44b1-d13a-499e-96f2-3238040bfb51" containerID="7a71e357edfd0b0901a9e4f2020ebe56fc65b18dac0c949a33b4077b6cc57003" exitCode=0 Nov 21 13:48:59 crc kubenswrapper[4904]: I1121 13:48:59.326387 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" event={"ID":"659c44b1-d13a-499e-96f2-3238040bfb51","Type":"ContainerDied","Data":"7a71e357edfd0b0901a9e4f2020ebe56fc65b18dac0c949a33b4077b6cc57003"} Nov 21 13:48:59 crc kubenswrapper[4904]: I1121 13:48:59.327018 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" event={"ID":"659c44b1-d13a-499e-96f2-3238040bfb51","Type":"ContainerStarted","Data":"72b430609d1cb1a754bbccef2673b7a35772f923d6028118e46f91101dd5754c"} Nov 21 13:49:01 crc kubenswrapper[4904]: I1121 13:49:01.339707 4904 generic.go:334] "Generic (PLEG): container finished" podID="659c44b1-d13a-499e-96f2-3238040bfb51" containerID="97cb9cbb7cafeb3c63d2dcf48c8b5dc84c8a4aef5f61dee6563b8868ae1f67cd" exitCode=0 Nov 21 13:49:01 crc kubenswrapper[4904]: I1121 13:49:01.339826 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" event={"ID":"659c44b1-d13a-499e-96f2-3238040bfb51","Type":"ContainerDied","Data":"97cb9cbb7cafeb3c63d2dcf48c8b5dc84c8a4aef5f61dee6563b8868ae1f67cd"} Nov 21 13:49:02 crc kubenswrapper[4904]: I1121 13:49:02.351781 4904 generic.go:334] "Generic (PLEG): container finished" podID="659c44b1-d13a-499e-96f2-3238040bfb51" containerID="dd4662e70061c69fcbb6935426fb010fbe22b4cf6cafa820d8e2e8cebae2b0f6" exitCode=0 Nov 21 13:49:02 crc kubenswrapper[4904]: I1121 13:49:02.351885 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" event={"ID":"659c44b1-d13a-499e-96f2-3238040bfb51","Type":"ContainerDied","Data":"dd4662e70061c69fcbb6935426fb010fbe22b4cf6cafa820d8e2e8cebae2b0f6"} Nov 21 13:49:03 crc kubenswrapper[4904]: I1121 13:49:03.764610 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:49:03 crc kubenswrapper[4904]: I1121 13:49:03.913181 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc5xm\" (UniqueName: \"kubernetes.io/projected/659c44b1-d13a-499e-96f2-3238040bfb51-kube-api-access-nc5xm\") pod \"659c44b1-d13a-499e-96f2-3238040bfb51\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " Nov 21 13:49:03 crc kubenswrapper[4904]: I1121 13:49:03.913305 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-util\") pod \"659c44b1-d13a-499e-96f2-3238040bfb51\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " Nov 21 13:49:03 crc kubenswrapper[4904]: I1121 13:49:03.913411 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-bundle\") pod \"659c44b1-d13a-499e-96f2-3238040bfb51\" (UID: \"659c44b1-d13a-499e-96f2-3238040bfb51\") " Nov 21 13:49:03 crc kubenswrapper[4904]: I1121 13:49:03.914722 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-bundle" (OuterVolumeSpecName: "bundle") pod "659c44b1-d13a-499e-96f2-3238040bfb51" (UID: "659c44b1-d13a-499e-96f2-3238040bfb51"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:49:03 crc kubenswrapper[4904]: I1121 13:49:03.922434 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659c44b1-d13a-499e-96f2-3238040bfb51-kube-api-access-nc5xm" (OuterVolumeSpecName: "kube-api-access-nc5xm") pod "659c44b1-d13a-499e-96f2-3238040bfb51" (UID: "659c44b1-d13a-499e-96f2-3238040bfb51"). InnerVolumeSpecName "kube-api-access-nc5xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:49:04 crc kubenswrapper[4904]: I1121 13:49:04.015395 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc5xm\" (UniqueName: \"kubernetes.io/projected/659c44b1-d13a-499e-96f2-3238040bfb51-kube-api-access-nc5xm\") on node \"crc\" DevicePath \"\"" Nov 21 13:49:04 crc kubenswrapper[4904]: I1121 13:49:04.015441 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:49:04 crc kubenswrapper[4904]: I1121 13:49:04.245352 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-util" (OuterVolumeSpecName: "util") pod "659c44b1-d13a-499e-96f2-3238040bfb51" (UID: "659c44b1-d13a-499e-96f2-3238040bfb51"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:49:04 crc kubenswrapper[4904]: I1121 13:49:04.321212 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/659c44b1-d13a-499e-96f2-3238040bfb51-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:49:04 crc kubenswrapper[4904]: I1121 13:49:04.372041 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" event={"ID":"659c44b1-d13a-499e-96f2-3238040bfb51","Type":"ContainerDied","Data":"72b430609d1cb1a754bbccef2673b7a35772f923d6028118e46f91101dd5754c"} Nov 21 13:49:04 crc kubenswrapper[4904]: I1121 13:49:04.372100 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b430609d1cb1a754bbccef2673b7a35772f923d6028118e46f91101dd5754c" Nov 21 13:49:04 crc kubenswrapper[4904]: I1121 13:49:04.372336 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.801506 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg"] Nov 21 13:49:11 crc kubenswrapper[4904]: E1121 13:49:11.802462 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659c44b1-d13a-499e-96f2-3238040bfb51" containerName="pull" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.802508 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="659c44b1-d13a-499e-96f2-3238040bfb51" containerName="pull" Nov 21 13:49:11 crc kubenswrapper[4904]: E1121 13:49:11.802537 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659c44b1-d13a-499e-96f2-3238040bfb51" containerName="extract" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.802545 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="659c44b1-d13a-499e-96f2-3238040bfb51" containerName="extract" Nov 21 13:49:11 crc kubenswrapper[4904]: E1121 13:49:11.802562 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659c44b1-d13a-499e-96f2-3238040bfb51" containerName="util" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.802571 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="659c44b1-d13a-499e-96f2-3238040bfb51" containerName="util" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.802739 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="659c44b1-d13a-499e-96f2-3238040bfb51" containerName="extract" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.803404 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.805482 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.811674 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.811674 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.813459 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-szzq2" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.813505 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.826064 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg"] Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.934604 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-webhook-cert\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.934665 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxmg\" (UniqueName: \"kubernetes.io/projected/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-kube-api-access-qkxmg\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:11 crc kubenswrapper[4904]: I1121 13:49:11.934927 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-apiservice-cert\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.036066 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkxmg\" (UniqueName: \"kubernetes.io/projected/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-kube-api-access-qkxmg\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.036122 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-webhook-cert\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.036195 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-apiservice-cert\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.040336 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q"] Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.045071 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.045846 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-apiservice-cert\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.049015 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-7g887" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.049752 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.049878 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.053284 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-webhook-cert\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.057101 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkxmg\" (UniqueName: \"kubernetes.io/projected/20d3a8a7-19b0-4ae0-8f88-5e16945e90db-kube-api-access-qkxmg\") pod \"metallb-operator-controller-manager-7c88768dbc-5mgsg\" (UID: \"20d3a8a7-19b0-4ae0-8f88-5e16945e90db\") " pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.062090 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q"] Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.121985 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.137940 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc2aa529-f9e7-4b13-94e8-e766abd5904f-webhook-cert\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.137991 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc2aa529-f9e7-4b13-94e8-e766abd5904f-apiservice-cert\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.138084 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md6cd\" (UniqueName: \"kubernetes.io/projected/bc2aa529-f9e7-4b13-94e8-e766abd5904f-kube-api-access-md6cd\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.239224 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md6cd\" (UniqueName: \"kubernetes.io/projected/bc2aa529-f9e7-4b13-94e8-e766abd5904f-kube-api-access-md6cd\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.239301 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc2aa529-f9e7-4b13-94e8-e766abd5904f-webhook-cert\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.239326 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc2aa529-f9e7-4b13-94e8-e766abd5904f-apiservice-cert\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.243719 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc2aa529-f9e7-4b13-94e8-e766abd5904f-webhook-cert\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.247155 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc2aa529-f9e7-4b13-94e8-e766abd5904f-apiservice-cert\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " 
pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.270635 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md6cd\" (UniqueName: \"kubernetes.io/projected/bc2aa529-f9e7-4b13-94e8-e766abd5904f-kube-api-access-md6cd\") pod \"metallb-operator-webhook-server-8d7cf7c8c-8rg4q\" (UID: \"bc2aa529-f9e7-4b13-94e8-e766abd5904f\") " pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.423526 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.497456 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg"] Nov 21 13:49:12 crc kubenswrapper[4904]: I1121 13:49:12.906522 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q"] Nov 21 13:49:12 crc kubenswrapper[4904]: W1121 13:49:12.917872 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc2aa529_f9e7_4b13_94e8_e766abd5904f.slice/crio-8c3dbbbd29bbb9d4eca967e568a4a63e15cccd78eb7b13a0a951e1df1d5a8780 WatchSource:0}: Error finding container 8c3dbbbd29bbb9d4eca967e568a4a63e15cccd78eb7b13a0a951e1df1d5a8780: Status 404 returned error can't find the container with id 8c3dbbbd29bbb9d4eca967e568a4a63e15cccd78eb7b13a0a951e1df1d5a8780 Nov 21 13:49:13 crc kubenswrapper[4904]: I1121 13:49:13.439795 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" event={"ID":"bc2aa529-f9e7-4b13-94e8-e766abd5904f","Type":"ContainerStarted","Data":"8c3dbbbd29bbb9d4eca967e568a4a63e15cccd78eb7b13a0a951e1df1d5a8780"} Nov 21 13:49:13 crc kubenswrapper[4904]: I1121 13:49:13.440781 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" event={"ID":"20d3a8a7-19b0-4ae0-8f88-5e16945e90db","Type":"ContainerStarted","Data":"f93f18ac260274c56e3dca9be9a3858b691b75c94de6d8b74b6acdc764af3716"} Nov 21 13:49:17 crc kubenswrapper[4904]: I1121 13:49:17.479773 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" event={"ID":"20d3a8a7-19b0-4ae0-8f88-5e16945e90db","Type":"ContainerStarted","Data":"faa4ebcc9f6211b322bc77d203a136307cd3eea051ee63ad78fff0beeffd6774"} Nov 21 13:49:17 crc kubenswrapper[4904]: I1121 13:49:17.480626 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:17 crc kubenswrapper[4904]: I1121 13:49:17.512867 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" podStartSLOduration=2.696966157 podStartE2EDuration="6.512836051s" podCreationTimestamp="2025-11-21 13:49:11 +0000 UTC" firstStartedPulling="2025-11-21 13:49:12.522104006 +0000 UTC m=+1026.643636568" lastFinishedPulling="2025-11-21 13:49:16.33797391 +0000 UTC m=+1030.459506462" observedRunningTime="2025-11-21 13:49:17.5022273 +0000 UTC m=+1031.623759872" watchObservedRunningTime="2025-11-21 13:49:17.512836051 +0000 UTC m=+1031.634368603" Nov 21 13:49:19 crc 
kubenswrapper[4904]: I1121 13:49:19.496760 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" event={"ID":"bc2aa529-f9e7-4b13-94e8-e766abd5904f","Type":"ContainerStarted","Data":"f8aaaf861acea4bca8bdf16682a5652afd284a6d2b7f94b7e9c692c04346b650"} Nov 21 13:49:19 crc kubenswrapper[4904]: I1121 13:49:19.498385 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:19 crc kubenswrapper[4904]: I1121 13:49:19.517448 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" podStartSLOduration=1.8500415399999999 podStartE2EDuration="7.517420721s" podCreationTimestamp="2025-11-21 13:49:12 +0000 UTC" firstStartedPulling="2025-11-21 13:49:12.922998461 +0000 UTC m=+1027.044531013" lastFinishedPulling="2025-11-21 13:49:18.590377642 +0000 UTC m=+1032.711910194" observedRunningTime="2025-11-21 13:49:19.514322645 +0000 UTC m=+1033.635855197" watchObservedRunningTime="2025-11-21 13:49:19.517420721 +0000 UTC m=+1033.638953273" Nov 21 13:49:32 crc kubenswrapper[4904]: I1121 13:49:32.435145 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8d7cf7c8c-8rg4q" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.126290 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7c88768dbc-5mgsg" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.860217 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z4mtp"] Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.864114 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.864349 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t"] Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.865565 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.870095 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zlxs5" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.871722 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.883850 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.896090 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.910832 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t"] Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919749 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-cert\") pod \"frr-k8s-webhook-server-6998585d5-9dv2t\" (UID: \"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919806 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-conf\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919841 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31b4b535-09fb-4c70-8a63-f07880e79fb2-metrics-certs\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919864 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-reloader\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919895 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-metrics\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919924 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mddrl\" (UniqueName: \"kubernetes.io/projected/31b4b535-09fb-4c70-8a63-f07880e79fb2-kube-api-access-mddrl\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919946 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-startup\") pod \"frr-k8s-z4mtp\" (UID: 
\"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919971 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht8zr\" (UniqueName: \"kubernetes.io/projected/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-kube-api-access-ht8zr\") pod \"frr-k8s-webhook-server-6998585d5-9dv2t\" (UID: \"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.919994 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-sockets\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.959921 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-sfp7g"] Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.961030 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-sfp7g" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.990686 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.990942 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.991082 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 21 13:49:52 crc kubenswrapper[4904]: I1121 13:49:52.991201 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rmhrr" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.003207 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-q4s9b"] Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.011783 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.016065 4904 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.021957 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022071 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-cert\") pod \"frr-k8s-webhook-server-6998585d5-9dv2t\" (UID: \"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:53 crc kubenswrapper[4904]: E1121 13:49:53.022229 4904 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 21 13:49:53 crc kubenswrapper[4904]: E1121 13:49:53.022303 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-cert podName:12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209 nodeName:}" failed. No retries permitted until 2025-11-21 13:49:53.522282239 +0000 UTC m=+1067.643814791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-cert") pod "frr-k8s-webhook-server-6998585d5-9dv2t" (UID: "12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209") : secret "frr-k8s-webhook-server-cert" not found Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022614 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-conf\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022672 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-metrics-certs\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022726 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31b4b535-09fb-4c70-8a63-f07880e79fb2-metrics-certs\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022756 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvw7b\" (UniqueName: \"kubernetes.io/projected/c20302bf-7009-4422-a47b-6b7fb5b14a99-kube-api-access-xvw7b\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022783 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-reloader\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022606 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-conf\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022820 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-metrics\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.022985 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mddrl\" (UniqueName: \"kubernetes.io/projected/31b4b535-09fb-4c70-8a63-f07880e79fb2-kube-api-access-mddrl\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.023050 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-startup\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.023127 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht8zr\" (UniqueName: \"kubernetes.io/projected/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-kube-api-access-ht8zr\") pod \"frr-k8s-webhook-server-6998585d5-9dv2t\" (UID: \"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.023181 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c20302bf-7009-4422-a47b-6b7fb5b14a99-metallb-excludel2\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.023229 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-metrics\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.023232 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-sockets\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.023450 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-sockets\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.023542 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/31b4b535-09fb-4c70-8a63-f07880e79fb2-reloader\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.026673 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/31b4b535-09fb-4c70-8a63-f07880e79fb2-frr-startup\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.040187 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/31b4b535-09fb-4c70-8a63-f07880e79fb2-metrics-certs\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.062240 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-q4s9b"] Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.064881 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht8zr\" (UniqueName: \"kubernetes.io/projected/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-kube-api-access-ht8zr\") pod \"frr-k8s-webhook-server-6998585d5-9dv2t\" (UID: \"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.074303 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mddrl\" (UniqueName: \"kubernetes.io/projected/31b4b535-09fb-4c70-8a63-f07880e79fb2-kube-api-access-mddrl\") pod \"frr-k8s-z4mtp\" (UID: \"31b4b535-09fb-4c70-8a63-f07880e79fb2\") " pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.126514 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.126602 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-metrics-certs\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.126648 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37655929-be52-46f4-b914-4500bada3dac-metrics-certs\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.126741 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvw7b\" (UniqueName: \"kubernetes.io/projected/c20302bf-7009-4422-a47b-6b7fb5b14a99-kube-api-access-xvw7b\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.126859 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37655929-be52-46f4-b914-4500bada3dac-cert\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: E1121 13:49:53.126973 4904 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 21 13:49:53 crc kubenswrapper[4904]: E1121 13:49:53.127019 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist podName:c20302bf-7009-4422-a47b-6b7fb5b14a99 nodeName:}" failed. No retries permitted until 2025-11-21 13:49:53.627002903 +0000 UTC m=+1067.748535455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist") pod "speaker-sfp7g" (UID: "c20302bf-7009-4422-a47b-6b7fb5b14a99") : secret "metallb-memberlist" not found Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.127049 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c20302bf-7009-4422-a47b-6b7fb5b14a99-metallb-excludel2\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.127112 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4mtp\" (UniqueName: \"kubernetes.io/projected/37655929-be52-46f4-b914-4500bada3dac-kube-api-access-x4mtp\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.127980 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c20302bf-7009-4422-a47b-6b7fb5b14a99-metallb-excludel2\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.132211 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-metrics-certs\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.179969 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvw7b\" (UniqueName: \"kubernetes.io/projected/c20302bf-7009-4422-a47b-6b7fb5b14a99-kube-api-access-xvw7b\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.189965 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.228961 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4mtp\" (UniqueName: \"kubernetes.io/projected/37655929-be52-46f4-b914-4500bada3dac-kube-api-access-x4mtp\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.229064 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37655929-be52-46f4-b914-4500bada3dac-metrics-certs\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.229093 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37655929-be52-46f4-b914-4500bada3dac-cert\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.232713 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37655929-be52-46f4-b914-4500bada3dac-cert\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.233483 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37655929-be52-46f4-b914-4500bada3dac-metrics-certs\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.246382 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4mtp\" (UniqueName: \"kubernetes.io/projected/37655929-be52-46f4-b914-4500bada3dac-kube-api-access-x4mtp\") pod \"controller-6c7b4b5f48-q4s9b\" (UID: \"37655929-be52-46f4-b914-4500bada3dac\") " pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.330927 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.533773 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-cert\") pod \"frr-k8s-webhook-server-6998585d5-9dv2t\" (UID: \"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.540372 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209-cert\") pod \"frr-k8s-webhook-server-6998585d5-9dv2t\" (UID: \"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.636116 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:53 crc kubenswrapper[4904]: E1121 13:49:53.636317 4904 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 21 13:49:53 crc kubenswrapper[4904]: E1121 13:49:53.636423 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist podName:c20302bf-7009-4422-a47b-6b7fb5b14a99 nodeName:}" failed. No retries permitted until 2025-11-21 13:49:54.636390141 +0000 UTC m=+1068.757922903 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist") pod "speaker-sfp7g" (UID: "c20302bf-7009-4422-a47b-6b7fb5b14a99") : secret "metallb-memberlist" not found Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.770493 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerStarted","Data":"1cffdd89a4b96b00af9b64395417bdbd174316bc270cb58ac5c611f5f187b28b"} Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.796563 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-q4s9b"] Nov 21 13:49:53 crc kubenswrapper[4904]: W1121 13:49:53.800795 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37655929_be52_46f4_b914_4500bada3dac.slice/crio-95ba5c5c9ae39b00f5c2af8fe5002bc85c4f0b6091fe5525d7a733ce6911c18c WatchSource:0}: Error finding container 95ba5c5c9ae39b00f5c2af8fe5002bc85c4f0b6091fe5525d7a733ce6911c18c: Status 404 returned error can't find the container with id 95ba5c5c9ae39b00f5c2af8fe5002bc85c4f0b6091fe5525d7a733ce6911c18c Nov 21 13:49:53 crc kubenswrapper[4904]: I1121 13:49:53.814460 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.070996 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t"] Nov 21 13:49:54 crc kubenswrapper[4904]: W1121 13:49:54.077929 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12c5fcd1_2cbd_47f8_b16d_dc3c8e34d209.slice/crio-1b6d3f608271f6c0919b21967171e6991c2bcd48b88600f67472707f3ee043b8 WatchSource:0}: Error finding container 1b6d3f608271f6c0919b21967171e6991c2bcd48b88600f67472707f3ee043b8: Status 404 returned error can't find the container with id 1b6d3f608271f6c0919b21967171e6991c2bcd48b88600f67472707f3ee043b8 Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.684888 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.692761 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c20302bf-7009-4422-a47b-6b7fb5b14a99-memberlist\") pod \"speaker-sfp7g\" (UID: \"c20302bf-7009-4422-a47b-6b7fb5b14a99\") " pod="metallb-system/speaker-sfp7g" Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.777429 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-sfp7g" Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.780123 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-q4s9b" event={"ID":"37655929-be52-46f4-b914-4500bada3dac","Type":"ContainerStarted","Data":"116e4dbdc59bd19ce2f20420843b2231ade478f68edff1442a15c2a0211c1a27"} Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.780184 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-q4s9b" event={"ID":"37655929-be52-46f4-b914-4500bada3dac","Type":"ContainerStarted","Data":"70a05a1f9dc33eda9cc0f8821a502e81c13be3448202d618ed8af67fc204ab17"} Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.780199 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-q4s9b" event={"ID":"37655929-be52-46f4-b914-4500bada3dac","Type":"ContainerStarted","Data":"95ba5c5c9ae39b00f5c2af8fe5002bc85c4f0b6091fe5525d7a733ce6911c18c"} Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.780294 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.781535 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" event={"ID":"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209","Type":"ContainerStarted","Data":"1b6d3f608271f6c0919b21967171e6991c2bcd48b88600f67472707f3ee043b8"} Nov 21 13:49:54 crc kubenswrapper[4904]: I1121 13:49:54.808459 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-q4s9b" podStartSLOduration=2.808399106 podStartE2EDuration="2.808399106s" podCreationTimestamp="2025-11-21 13:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-21 13:49:54.800200357 +0000 UTC m=+1068.921732949" watchObservedRunningTime="2025-11-21 13:49:54.808399106 +0000 UTC m=+1068.929931678" Nov 21 13:49:54 crc kubenswrapper[4904]: W1121 13:49:54.822314 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc20302bf_7009_4422_a47b_6b7fb5b14a99.slice/crio-49e607632838299d45aec14666ccd5ce901a264f249197fe91d1e5f8c0c39c7c WatchSource:0}: Error finding container 49e607632838299d45aec14666ccd5ce901a264f249197fe91d1e5f8c0c39c7c: Status 404 returned error can't find the container with id 49e607632838299d45aec14666ccd5ce901a264f249197fe91d1e5f8c0c39c7c Nov 21 13:49:55 crc kubenswrapper[4904]: I1121 13:49:55.795306 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-sfp7g" event={"ID":"c20302bf-7009-4422-a47b-6b7fb5b14a99","Type":"ContainerStarted","Data":"d832f4e18e742794e40a0aedda9218be6e41f8531865eebe8059e5fa301099b6"} Nov 21 13:49:55 crc kubenswrapper[4904]: I1121 13:49:55.795686 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-sfp7g" event={"ID":"c20302bf-7009-4422-a47b-6b7fb5b14a99","Type":"ContainerStarted","Data":"b8c2a805f37349ed893d9c714faf91a91b341d49491c27b3c1b91a3f354eb03f"} Nov 21 13:49:55 crc kubenswrapper[4904]: I1121 13:49:55.795702 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-sfp7g" event={"ID":"c20302bf-7009-4422-a47b-6b7fb5b14a99","Type":"ContainerStarted","Data":"49e607632838299d45aec14666ccd5ce901a264f249197fe91d1e5f8c0c39c7c"} Nov 21 13:49:55 crc kubenswrapper[4904]: I1121 13:49:55.796109 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-sfp7g" Nov 21 13:49:56 crc kubenswrapper[4904]: I1121 13:49:56.537995 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-sfp7g" podStartSLOduration=4.537970495 podStartE2EDuration="4.537970495s" podCreationTimestamp="2025-11-21 13:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:49:55.838098121 +0000 UTC m=+1069.959630683" watchObservedRunningTime="2025-11-21 13:49:56.537970495 +0000 UTC m=+1070.659503047" Nov 21 13:49:58 crc kubenswrapper[4904]: I1121 13:49:58.113799 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:49:58 crc kubenswrapper[4904]: I1121 13:49:58.114195 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:50:01 crc kubenswrapper[4904]: I1121 13:50:01.852504 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" event={"ID":"12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209","Type":"ContainerStarted","Data":"bff4a647740f9f9266d65a4bfebac45c3748a2c36bc8620cbd5d2e77cc86acaa"} Nov 21 13:50:01 crc kubenswrapper[4904]: I1121 13:50:01.853154 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:50:01 crc kubenswrapper[4904]: I1121 13:50:01.855112 4904 generic.go:334] "Generic (PLEG): container finished" podID="31b4b535-09fb-4c70-8a63-f07880e79fb2" containerID="9ed012ac464b38746fcfbf708bd27f7de735fd1b1948e2e3493032fe07d76ae1" exitCode=0 Nov 21 13:50:01 crc kubenswrapper[4904]: I1121 13:50:01.855140 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerDied","Data":"9ed012ac464b38746fcfbf708bd27f7de735fd1b1948e2e3493032fe07d76ae1"} Nov 21 13:50:01 crc kubenswrapper[4904]: I1121 13:50:01.886537 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" podStartSLOduration=2.43409734 podStartE2EDuration="9.886500185s" podCreationTimestamp="2025-11-21 13:49:52 +0000 UTC" firstStartedPulling="2025-11-21 13:49:54.080571641 +0000 UTC m=+1068.202104183" lastFinishedPulling="2025-11-21 13:50:01.532974476 +0000 UTC m=+1075.654507028" observedRunningTime="2025-11-21 13:50:01.873425066 +0000 UTC m=+1075.994957628" watchObservedRunningTime="2025-11-21 13:50:01.886500185 +0000 UTC m=+1076.008032767" Nov 21 13:50:02 crc kubenswrapper[4904]: I1121 13:50:02.864864 4904 generic.go:334] "Generic (PLEG): container finished" podID="31b4b535-09fb-4c70-8a63-f07880e79fb2" containerID="be2751390c8efd78f3ef41e21c07e7fe1cbb1abb898003f401d41725e214c0c6" exitCode=0 Nov 21 13:50:02 crc kubenswrapper[4904]: I1121 13:50:02.864943 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerDied","Data":"be2751390c8efd78f3ef41e21c07e7fe1cbb1abb898003f401d41725e214c0c6"} Nov 21 13:50:03 crc kubenswrapper[4904]: I1121 13:50:03.335270 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-q4s9b" Nov 21 13:50:03 crc kubenswrapper[4904]: I1121 13:50:03.882058 4904 generic.go:334] "Generic (PLEG): container finished" podID="31b4b535-09fb-4c70-8a63-f07880e79fb2" containerID="41cf7f276f68e6666b3b97235dca72db006912e4db5fd6e6ca24b3b1c4b100d4" exitCode=0 Nov 21 13:50:03 crc kubenswrapper[4904]: I1121 13:50:03.882128 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerDied","Data":"41cf7f276f68e6666b3b97235dca72db006912e4db5fd6e6ca24b3b1c4b100d4"} Nov 21 13:50:04 crc kubenswrapper[4904]: I1121 13:50:04.897197 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerStarted","Data":"d3c9a8a24ef554fb73361b592383ac99ae350b152cf1321cc448c828b002bfc7"} Nov 21 13:50:04 crc kubenswrapper[4904]: I1121 13:50:04.897597 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerStarted","Data":"2525add7394df166b80734cd9f299af09e8cdcdd861b0c755783c4e811ab9451"} Nov 21 13:50:04 crc kubenswrapper[4904]: I1121 13:50:04.897613 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerStarted","Data":"d6e5bdebc3c7f5659b6d3af01749fbff6529ffb6f4c734bbe2e1f203b0a3c766"} Nov 21 13:50:04 crc kubenswrapper[4904]: I1121 13:50:04.897625 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerStarted","Data":"582d88a27a6e402986642a4283610047eb22cc13f0ce1873fceb40f5dfe77dfd"} Nov 21 13:50:04 crc kubenswrapper[4904]: I1121 13:50:04.897640 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerStarted","Data":"d362ef69c441d0567a8b0230127f38ca40c30f30626d7ac27b1bf9447df82280"} Nov 21 13:50:05 crc kubenswrapper[4904]: I1121 13:50:05.909618 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z4mtp" event={"ID":"31b4b535-09fb-4c70-8a63-f07880e79fb2","Type":"ContainerStarted","Data":"f2857bb3fa1495230234df3005e4132bf6426406a2b637fc3830bc911e84d285"} Nov 21 13:50:05 crc kubenswrapper[4904]: I1121 13:50:05.910081 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:50:05 crc kubenswrapper[4904]: I1121 13:50:05.942418 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z4mtp" podStartSLOduration=5.7832155929999995 podStartE2EDuration="13.94239703s" podCreationTimestamp="2025-11-21 13:49:52 +0000 UTC" firstStartedPulling="2025-11-21 13:49:53.352789077 +0000 UTC m=+1067.474321629" lastFinishedPulling="2025-11-21 13:50:01.511970514 +0000 UTC m=+1075.633503066" observedRunningTime="2025-11-21 13:50:05.937596544 +0000 UTC m=+1080.059129096" watchObservedRunningTime="2025-11-21 13:50:05.94239703 +0000 UTC m=+1080.063929572" Nov 21 13:50:08 crc kubenswrapper[4904]: I1121 13:50:08.190974 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:50:08 crc kubenswrapper[4904]: I1121 13:50:08.226127 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:50:13 crc kubenswrapper[4904]: I1121 13:50:13.819985 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-9dv2t" Nov 21 13:50:14 crc kubenswrapper[4904]: I1121 13:50:14.781868 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-sfp7g" Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.578813 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-67878"] Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.582188 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-67878" Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.589045 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.592357 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.613122 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-67878"] Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.717048 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5cg\" (UniqueName: \"kubernetes.io/projected/2d41d2f3-0d9f-453f-9417-14b5e56f2130-kube-api-access-sd5cg\") pod \"openstack-operator-index-67878\" (UID: \"2d41d2f3-0d9f-453f-9417-14b5e56f2130\") " pod="openstack-operators/openstack-operator-index-67878" Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.818898 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd5cg\" (UniqueName: \"kubernetes.io/projected/2d41d2f3-0d9f-453f-9417-14b5e56f2130-kube-api-access-sd5cg\") pod \"openstack-operator-index-67878\" (UID: \"2d41d2f3-0d9f-453f-9417-14b5e56f2130\") " pod="openstack-operators/openstack-operator-index-67878" Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.843939 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd5cg\" (UniqueName: \"kubernetes.io/projected/2d41d2f3-0d9f-453f-9417-14b5e56f2130-kube-api-access-sd5cg\") pod \"openstack-operator-index-67878\" (UID: \"2d41d2f3-0d9f-453f-9417-14b5e56f2130\") " pod="openstack-operators/openstack-operator-index-67878" Nov 21 13:50:17 crc kubenswrapper[4904]: I1121 13:50:17.919152 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67878" Nov 21 13:50:18 crc kubenswrapper[4904]: I1121 13:50:18.331802 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-67878"] Nov 21 13:50:19 crc kubenswrapper[4904]: I1121 13:50:19.028083 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67878" event={"ID":"2d41d2f3-0d9f-453f-9417-14b5e56f2130","Type":"ContainerStarted","Data":"695a5584ef4b59d3c2fd59a8a12230010d90c99715ad4abba2d284d2e10d0ece"} Nov 21 13:50:20 crc kubenswrapper[4904]: I1121 13:50:20.306180 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-67878"] Nov 21 13:50:20 crc kubenswrapper[4904]: I1121 13:50:20.920323 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7gkpw"] Nov 21 13:50:20 crc kubenswrapper[4904]: I1121 13:50:20.925425 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:20 crc kubenswrapper[4904]: I1121 13:50:20.927521 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vlr9s" Nov 21 13:50:20 crc kubenswrapper[4904]: I1121 13:50:20.934314 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7gkpw"] Nov 21 13:50:20 crc kubenswrapper[4904]: I1121 13:50:20.971943 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5hxc\" (UniqueName: \"kubernetes.io/projected/bd5157a7-e11b-444d-8e6b-d96382adc923-kube-api-access-g5hxc\") pod \"openstack-operator-index-7gkpw\" (UID: \"bd5157a7-e11b-444d-8e6b-d96382adc923\") " pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:21 crc kubenswrapper[4904]: I1121 13:50:21.073337 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5hxc\" (UniqueName: \"kubernetes.io/projected/bd5157a7-e11b-444d-8e6b-d96382adc923-kube-api-access-g5hxc\") pod \"openstack-operator-index-7gkpw\" (UID: \"bd5157a7-e11b-444d-8e6b-d96382adc923\") " pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:21 crc kubenswrapper[4904]: I1121 13:50:21.098131 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5hxc\" (UniqueName: \"kubernetes.io/projected/bd5157a7-e11b-444d-8e6b-d96382adc923-kube-api-access-g5hxc\") pod \"openstack-operator-index-7gkpw\" (UID: \"bd5157a7-e11b-444d-8e6b-d96382adc923\") " pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:21 crc kubenswrapper[4904]: I1121 13:50:21.255554 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:22 crc kubenswrapper[4904]: I1121 13:50:22.796247 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7gkpw"] Nov 21 13:50:23 crc kubenswrapper[4904]: I1121 13:50:23.193196 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z4mtp" Nov 21 13:50:24 crc kubenswrapper[4904]: I1121 13:50:24.077778 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7gkpw" event={"ID":"bd5157a7-e11b-444d-8e6b-d96382adc923","Type":"ContainerStarted","Data":"520f427a21651aa2bb1c563febd12a2779f3658f9f7d15f5e44fd6eb4596e915"} Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.087185 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67878" event={"ID":"2d41d2f3-0d9f-453f-9417-14b5e56f2130","Type":"ContainerStarted","Data":"f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3"} Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.087263 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-67878" podUID="2d41d2f3-0d9f-453f-9417-14b5e56f2130" containerName="registry-server" containerID="cri-o://f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3" gracePeriod=2 Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.093021 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7gkpw" event={"ID":"bd5157a7-e11b-444d-8e6b-d96382adc923","Type":"ContainerStarted","Data":"c7d3c0b989c5525026977d3ddf0ac7e9e968f2abec0a961b146edb8fcf1a34df"} Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.149559 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-67878" podStartSLOduration=1.684013879 podStartE2EDuration="8.149531342s" podCreationTimestamp="2025-11-21 13:50:17 +0000 UTC" firstStartedPulling="2025-11-21 13:50:18.339715295 +0000 UTC m=+1092.461247847" lastFinishedPulling="2025-11-21 13:50:24.805232718 +0000 UTC m=+1098.926765310" observedRunningTime="2025-11-21 13:50:25.143371242 +0000 UTC m=+1099.264903804" watchObservedRunningTime="2025-11-21 13:50:25.149531342 +0000 UTC m=+1099.271063904" Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.163310 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7gkpw" podStartSLOduration=3.663019019 podStartE2EDuration="5.163289857s" podCreationTimestamp="2025-11-21 13:50:20 +0000 UTC" firstStartedPulling="2025-11-21 13:50:23.299967908 +0000 UTC m=+1097.421500460" lastFinishedPulling="2025-11-21 13:50:24.800238706 +0000 UTC m=+1098.921771298" observedRunningTime="2025-11-21 13:50:25.160795636 +0000 UTC m=+1099.282328198" watchObservedRunningTime="2025-11-21 13:50:25.163289857 +0000 UTC m=+1099.284822409" Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.537838 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-67878" Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.556904 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd5cg\" (UniqueName: \"kubernetes.io/projected/2d41d2f3-0d9f-453f-9417-14b5e56f2130-kube-api-access-sd5cg\") pod \"2d41d2f3-0d9f-453f-9417-14b5e56f2130\" (UID: \"2d41d2f3-0d9f-453f-9417-14b5e56f2130\") " Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.564121 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d41d2f3-0d9f-453f-9417-14b5e56f2130-kube-api-access-sd5cg" (OuterVolumeSpecName: "kube-api-access-sd5cg") pod "2d41d2f3-0d9f-453f-9417-14b5e56f2130" (UID: "2d41d2f3-0d9f-453f-9417-14b5e56f2130"). InnerVolumeSpecName "kube-api-access-sd5cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:50:25 crc kubenswrapper[4904]: I1121 13:50:25.659185 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd5cg\" (UniqueName: \"kubernetes.io/projected/2d41d2f3-0d9f-453f-9417-14b5e56f2130-kube-api-access-sd5cg\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.104890 4904 generic.go:334] "Generic (PLEG): container finished" podID="2d41d2f3-0d9f-453f-9417-14b5e56f2130" containerID="f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3" exitCode=0 Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.104974 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67878" event={"ID":"2d41d2f3-0d9f-453f-9417-14b5e56f2130","Type":"ContainerDied","Data":"f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3"} Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.105048 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-67878" Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.105100 4904 scope.go:117] "RemoveContainer" containerID="f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3" Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.105045 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67878" event={"ID":"2d41d2f3-0d9f-453f-9417-14b5e56f2130","Type":"ContainerDied","Data":"695a5584ef4b59d3c2fd59a8a12230010d90c99715ad4abba2d284d2e10d0ece"} Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.146196 4904 scope.go:117] "RemoveContainer" containerID="f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3" Nov 21 13:50:26 crc kubenswrapper[4904]: E1121 13:50:26.146635 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3\": container with ID starting with f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3 not found: ID does not exist" containerID="f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3" Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.146707 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3"} err="failed to get container status \"f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3\": rpc error: code = NotFound desc = could not find container \"f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3\": container with ID starting with f7ab837495fdb9b2f9ae5b452a643ea6b1bd570091bf5621f0662ea40b2faea3 not found: ID does not exist" Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.154303 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-67878"] Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.162710 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-67878"] Nov 21 13:50:26 crc kubenswrapper[4904]: I1121 13:50:26.528600 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d41d2f3-0d9f-453f-9417-14b5e56f2130" path="/var/lib/kubelet/pods/2d41d2f3-0d9f-453f-9417-14b5e56f2130/volumes" Nov 21 13:50:28 crc kubenswrapper[4904]: I1121 13:50:28.113813 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:50:28 crc kubenswrapper[4904]: I1121 13:50:28.114187 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:50:31 crc kubenswrapper[4904]: I1121 13:50:31.256353 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:31 crc kubenswrapper[4904]: I1121 13:50:31.256873 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:31 crc kubenswrapper[4904]: I1121 13:50:31.292957 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:32 crc kubenswrapper[4904]: I1121 13:50:32.205254 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7gkpw" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.760392 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr"] Nov 21 13:50:42 crc kubenswrapper[4904]: E1121 13:50:42.761272 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d41d2f3-0d9f-453f-9417-14b5e56f2130" containerName="registry-server" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.761287 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d41d2f3-0d9f-453f-9417-14b5e56f2130" containerName="registry-server" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.761412 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d41d2f3-0d9f-453f-9417-14b5e56f2130" containerName="registry-server" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.762362 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.764606 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2b2lw" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.777329 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr"] Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.868908 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-util\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.869421 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvlwf\" (UniqueName: \"kubernetes.io/projected/def3a46f-3c06-4814-b9b9-a702bf229dcf-kube-api-access-xvlwf\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.869520 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-bundle\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.971571 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvlwf\" (UniqueName: 
\"kubernetes.io/projected/def3a46f-3c06-4814-b9b9-a702bf229dcf-kube-api-access-xvlwf\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.971716 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-bundle\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.971845 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-util\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.972228 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-bundle\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.972303 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-util\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:42 crc kubenswrapper[4904]: I1121 13:50:42.990879 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvlwf\" (UniqueName: \"kubernetes.io/projected/def3a46f-3c06-4814-b9b9-a702bf229dcf-kube-api-access-xvlwf\") pod \"1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:43 crc kubenswrapper[4904]: I1121 13:50:43.077517 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:43 crc kubenswrapper[4904]: I1121 13:50:43.497814 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr"] Nov 21 13:50:44 crc kubenswrapper[4904]: I1121 13:50:44.250453 4904 generic.go:334] "Generic (PLEG): container finished" podID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerID="3880eacbd16bfa25d6c47381d99b20a552a999494cb27f848707450026ed9be3" exitCode=0 Nov 21 13:50:44 crc kubenswrapper[4904]: I1121 13:50:44.250513 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" event={"ID":"def3a46f-3c06-4814-b9b9-a702bf229dcf","Type":"ContainerDied","Data":"3880eacbd16bfa25d6c47381d99b20a552a999494cb27f848707450026ed9be3"} Nov 21 13:50:44 crc kubenswrapper[4904]: I1121 13:50:44.250768 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" event={"ID":"def3a46f-3c06-4814-b9b9-a702bf229dcf","Type":"ContainerStarted","Data":"5df4e811da94c2a348af2d02757e94ce01657f165ae10d6fd721d1dbdf504e3b"} Nov 21 13:50:45 crc kubenswrapper[4904]: I1121 13:50:45.258812 4904 generic.go:334] "Generic (PLEG): container finished" podID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerID="9a0c519c386f05fe212c3113c50ce6ed0bfcef651fd326ecbdb07fc3aab9b970" exitCode=0 Nov 21 13:50:45 crc kubenswrapper[4904]: I1121 13:50:45.258897 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" event={"ID":"def3a46f-3c06-4814-b9b9-a702bf229dcf","Type":"ContainerDied","Data":"9a0c519c386f05fe212c3113c50ce6ed0bfcef651fd326ecbdb07fc3aab9b970"} Nov 21 13:50:46 crc kubenswrapper[4904]: I1121 13:50:46.268311 4904 generic.go:334] "Generic (PLEG): container finished" podID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerID="3022d6cd782536a5d0bee86dc074a0d8b694855454856d3cf8c1657bd7ef6561" exitCode=0 Nov 21 13:50:46 crc kubenswrapper[4904]: I1121 13:50:46.268378 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" event={"ID":"def3a46f-3c06-4814-b9b9-a702bf229dcf","Type":"ContainerDied","Data":"3022d6cd782536a5d0bee86dc074a0d8b694855454856d3cf8c1657bd7ef6561"} Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.646949 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.770276 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-bundle\") pod \"def3a46f-3c06-4814-b9b9-a702bf229dcf\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.770335 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvlwf\" (UniqueName: \"kubernetes.io/projected/def3a46f-3c06-4814-b9b9-a702bf229dcf-kube-api-access-xvlwf\") pod \"def3a46f-3c06-4814-b9b9-a702bf229dcf\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.770455 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-util\") pod \"def3a46f-3c06-4814-b9b9-a702bf229dcf\" (UID: \"def3a46f-3c06-4814-b9b9-a702bf229dcf\") " Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.771309 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-bundle" (OuterVolumeSpecName: "bundle") pod "def3a46f-3c06-4814-b9b9-a702bf229dcf" (UID: "def3a46f-3c06-4814-b9b9-a702bf229dcf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.777885 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/def3a46f-3c06-4814-b9b9-a702bf229dcf-kube-api-access-xvlwf" (OuterVolumeSpecName: "kube-api-access-xvlwf") pod "def3a46f-3c06-4814-b9b9-a702bf229dcf" (UID: "def3a46f-3c06-4814-b9b9-a702bf229dcf"). InnerVolumeSpecName "kube-api-access-xvlwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.784311 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-util" (OuterVolumeSpecName: "util") pod "def3a46f-3c06-4814-b9b9-a702bf229dcf" (UID: "def3a46f-3c06-4814-b9b9-a702bf229dcf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.872735 4904 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.872791 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvlwf\" (UniqueName: \"kubernetes.io/projected/def3a46f-3c06-4814-b9b9-a702bf229dcf-kube-api-access-xvlwf\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:47 crc kubenswrapper[4904]: I1121 13:50:47.872808 4904 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/def3a46f-3c06-4814-b9b9-a702bf229dcf-util\") on node \"crc\" DevicePath \"\"" Nov 21 13:50:48 crc kubenswrapper[4904]: I1121 13:50:48.289943 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" event={"ID":"def3a46f-3c06-4814-b9b9-a702bf229dcf","Type":"ContainerDied","Data":"5df4e811da94c2a348af2d02757e94ce01657f165ae10d6fd721d1dbdf504e3b"} Nov 21 13:50:48 crc kubenswrapper[4904]: I1121 13:50:48.289992 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df4e811da94c2a348af2d02757e94ce01657f165ae10d6fd721d1dbdf504e3b" Nov 21 13:50:48 crc kubenswrapper[4904]: I1121 13:50:48.290057 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.503403 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f"] Nov 21 13:50:55 crc kubenswrapper[4904]: E1121 13:50:55.504219 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerName="util" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.504232 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerName="util" Nov 21 13:50:55 crc kubenswrapper[4904]: E1121 13:50:55.504251 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerName="pull" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.504257 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerName="pull" Nov 21 13:50:55 crc kubenswrapper[4904]: E1121 13:50:55.504271 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerName="extract" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.504277 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerName="extract" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.504400 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="def3a46f-3c06-4814-b9b9-a702bf229dcf" containerName="extract" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.504911 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.509231 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-27mkq" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.534270 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f"] Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.620559 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwt54\" (UniqueName: \"kubernetes.io/projected/760055d2-e646-4466-a667-90292a69546a-kube-api-access-wwt54\") pod \"openstack-operator-controller-operator-7bc9ddc77b-wmm4f\" (UID: \"760055d2-e646-4466-a667-90292a69546a\") " pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.722835 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwt54\" (UniqueName: \"kubernetes.io/projected/760055d2-e646-4466-a667-90292a69546a-kube-api-access-wwt54\") pod \"openstack-operator-controller-operator-7bc9ddc77b-wmm4f\" (UID: \"760055d2-e646-4466-a667-90292a69546a\") " pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.762849 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwt54\" (UniqueName: \"kubernetes.io/projected/760055d2-e646-4466-a667-90292a69546a-kube-api-access-wwt54\") pod \"openstack-operator-controller-operator-7bc9ddc77b-wmm4f\" (UID: \"760055d2-e646-4466-a667-90292a69546a\") " pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" Nov 21 13:50:55 crc kubenswrapper[4904]: I1121 13:50:55.825995 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" Nov 21 13:50:56 crc kubenswrapper[4904]: I1121 13:50:56.316158 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f"] Nov 21 13:50:56 crc kubenswrapper[4904]: I1121 13:50:56.350105 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" event={"ID":"760055d2-e646-4466-a667-90292a69546a","Type":"ContainerStarted","Data":"f525ccb6641ae5c0bbe7905155edcfa13bfd1c32d9af46ea92de1cdb56e2246f"} Nov 21 13:50:58 crc kubenswrapper[4904]: I1121 13:50:58.113089 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:50:58 crc kubenswrapper[4904]: I1121 13:50:58.113591 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:50:58 crc kubenswrapper[4904]: I1121 13:50:58.113673 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:50:58 crc kubenswrapper[4904]: I1121 13:50:58.114604 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7fd2b100d2e7ad73c083b4c79b52506999f9dce6592051de1411c6354bd5da0"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:50:58 crc kubenswrapper[4904]: I1121 13:50:58.114699 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://d7fd2b100d2e7ad73c083b4c79b52506999f9dce6592051de1411c6354bd5da0" gracePeriod=600 Nov 21 13:50:58 crc kubenswrapper[4904]: I1121 13:50:58.374380 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="d7fd2b100d2e7ad73c083b4c79b52506999f9dce6592051de1411c6354bd5da0" exitCode=0 Nov 21 13:50:58 crc kubenswrapper[4904]: I1121 13:50:58.374438 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"d7fd2b100d2e7ad73c083b4c79b52506999f9dce6592051de1411c6354bd5da0"} Nov 21 13:50:58 crc kubenswrapper[4904]: I1121 13:50:58.374490 4904 scope.go:117] "RemoveContainer" containerID="be6a3c0a99c9505797540ba9588fe9f6a753a8471c941586b86762324c656b9e" Nov 21 13:51:01 crc kubenswrapper[4904]: I1121 13:51:01.409488 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" 
event={"ID":"760055d2-e646-4466-a667-90292a69546a","Type":"ContainerStarted","Data":"7c3ac3132009ab4ef480d8293204bd61cfdc5558949b4d327110c86ea1d9dc7d"} Nov 21 13:51:01 crc kubenswrapper[4904]: I1121 13:51:01.410262 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" Nov 21 13:51:01 crc kubenswrapper[4904]: I1121 13:51:01.412610 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"dcc2d7cbaa7ef87e0c42834de16075a7ebf9ca0b1a68c156ba86b82f49b3f653"} Nov 21 13:51:01 crc kubenswrapper[4904]: I1121 13:51:01.445255 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" podStartSLOduration=2.157357175 podStartE2EDuration="6.445226605s" podCreationTimestamp="2025-11-21 13:50:55 +0000 UTC" firstStartedPulling="2025-11-21 13:50:56.335388865 +0000 UTC m=+1130.456921417" lastFinishedPulling="2025-11-21 13:51:00.623258305 +0000 UTC m=+1134.744790847" observedRunningTime="2025-11-21 13:51:01.435987191 +0000 UTC m=+1135.557519763" watchObservedRunningTime="2025-11-21 13:51:01.445226605 +0000 UTC m=+1135.566759157" Nov 21 13:51:05 crc kubenswrapper[4904]: I1121 13:51:05.829737 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7bc9ddc77b-wmm4f" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.394852 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.396943 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.399036 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wmq6t" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.411945 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.413565 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.415791 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-kxhqn" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.420491 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.440746 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kspnw\" (UniqueName: \"kubernetes.io/projected/4b290147-91ef-4734-961b-b61487960c33-kube-api-access-kspnw\") pod \"cinder-operator-controller-manager-79856dc55c-hm8cc\" (UID: \"4b290147-91ef-4734-961b-b61487960c33\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.440886 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs94n\" (UniqueName: \"kubernetes.io/projected/50f3313d-ff99-4b0e-931a-c2a774375ae3-kube-api-access-xs94n\") pod \"barbican-operator-controller-manager-86dc4d89c8-j9dcf\" (UID: \"50f3313d-ff99-4b0e-931a-c2a774375ae3\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.446731 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.448841 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.454724 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8gqzb" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.478810 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.506720 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.507945 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.511078 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2drsh" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.524142 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.540149 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.542163 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.547062 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvhs\" (UniqueName: \"kubernetes.io/projected/b91d5a1c-a2d5-4875-a23a-e43ae7f18937-kube-api-access-5fvhs\") pod \"glance-operator-controller-manager-68b95954c9-bnt22\" (UID: \"b91d5a1c-a2d5-4875-a23a-e43ae7f18937\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.547114 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlffj\" (UniqueName: \"kubernetes.io/projected/14e3fbea-6dc2-44a4-81be-dfda27a6cdd8-kube-api-access-xlffj\") pod \"designate-operator-controller-manager-7d695c9b56-lnp8w\" (UID: \"14e3fbea-6dc2-44a4-81be-dfda27a6cdd8\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.547180 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs94n\" (UniqueName: \"kubernetes.io/projected/50f3313d-ff99-4b0e-931a-c2a774375ae3-kube-api-access-xs94n\") pod \"barbican-operator-controller-manager-86dc4d89c8-j9dcf\" (UID: \"50f3313d-ff99-4b0e-931a-c2a774375ae3\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.547243 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kspnw\" (UniqueName: \"kubernetes.io/projected/4b290147-91ef-4734-961b-b61487960c33-kube-api-access-kspnw\") pod \"cinder-operator-controller-manager-79856dc55c-hm8cc\" (UID: \"4b290147-91ef-4734-961b-b61487960c33\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.547309 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bbzj\" (UniqueName: \"kubernetes.io/projected/55b86375-94e3-4c12-96b9-c5f581b3d8f3-kube-api-access-2bbzj\") pod \"heat-operator-controller-manager-774b86978c-ncvqt\" (UID: \"55b86375-94e3-4c12-96b9-c5f581b3d8f3\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.554114 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g44tn" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.556839 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.584458 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.585947 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.593709 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vpqqg" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.607557 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kspnw\" (UniqueName: \"kubernetes.io/projected/4b290147-91ef-4734-961b-b61487960c33-kube-api-access-kspnw\") pod \"cinder-operator-controller-manager-79856dc55c-hm8cc\" (UID: \"4b290147-91ef-4734-961b-b61487960c33\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.611177 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.622799 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs94n\" (UniqueName: \"kubernetes.io/projected/50f3313d-ff99-4b0e-931a-c2a774375ae3-kube-api-access-xs94n\") pod \"barbican-operator-controller-manager-86dc4d89c8-j9dcf\" (UID: \"50f3313d-ff99-4b0e-931a-c2a774375ae3\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.646230 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.648546 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bbzj\" (UniqueName: \"kubernetes.io/projected/55b86375-94e3-4c12-96b9-c5f581b3d8f3-kube-api-access-2bbzj\") pod \"heat-operator-controller-manager-774b86978c-ncvqt\" (UID: \"55b86375-94e3-4c12-96b9-c5f581b3d8f3\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.648616 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvhs\" (UniqueName: \"kubernetes.io/projected/b91d5a1c-a2d5-4875-a23a-e43ae7f18937-kube-api-access-5fvhs\") pod \"glance-operator-controller-manager-68b95954c9-bnt22\" (UID: \"b91d5a1c-a2d5-4875-a23a-e43ae7f18937\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.648668 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlffj\" (UniqueName: \"kubernetes.io/projected/14e3fbea-6dc2-44a4-81be-dfda27a6cdd8-kube-api-access-xlffj\") pod \"designate-operator-controller-manager-7d695c9b56-lnp8w\" (UID: \"14e3fbea-6dc2-44a4-81be-dfda27a6cdd8\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.661740 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.663302 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.670119 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-p92cw" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.670371 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.700248 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvhs\" (UniqueName: \"kubernetes.io/projected/b91d5a1c-a2d5-4875-a23a-e43ae7f18937-kube-api-access-5fvhs\") pod \"glance-operator-controller-manager-68b95954c9-bnt22\" (UID: \"b91d5a1c-a2d5-4875-a23a-e43ae7f18937\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.703916 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.726143 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bbzj\" (UniqueName: \"kubernetes.io/projected/55b86375-94e3-4c12-96b9-c5f581b3d8f3-kube-api-access-2bbzj\") pod \"heat-operator-controller-manager-774b86978c-ncvqt\" (UID: \"55b86375-94e3-4c12-96b9-c5f581b3d8f3\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.726646 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.737466 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlffj\" (UniqueName: \"kubernetes.io/projected/14e3fbea-6dc2-44a4-81be-dfda27a6cdd8-kube-api-access-xlffj\") pod \"designate-operator-controller-manager-7d695c9b56-lnp8w\" (UID: \"14e3fbea-6dc2-44a4-81be-dfda27a6cdd8\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.751582 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxwq\" (UniqueName: \"kubernetes.io/projected/d81ae352-08d2-433c-b883-deeb78888945-kube-api-access-2wxwq\") pod \"horizon-operator-controller-manager-68c9694994-jp6bp\" (UID: \"d81ae352-08d2-433c-b883-deeb78888945\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.771203 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.771708 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.781123 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.790436 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.806238 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-996sb" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.807161 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.808747 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.820199 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-znvvm" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.832789 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.840707 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.853630 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64f35f86-9389-4506-ad53-d42eec926447-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-h78bp\" (UID: \"64f35f86-9389-4506-ad53-d42eec926447\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.853711 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wxwq\" (UniqueName: \"kubernetes.io/projected/d81ae352-08d2-433c-b883-deeb78888945-kube-api-access-2wxwq\") pod \"horizon-operator-controller-manager-68c9694994-jp6bp\" (UID: \"d81ae352-08d2-433c-b883-deeb78888945\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.853812 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppl4z\" (UniqueName: \"kubernetes.io/projected/64f35f86-9389-4506-ad53-d42eec926447-kube-api-access-ppl4z\") pod \"infra-operator-controller-manager-d5cc86f4b-h78bp\" (UID: \"64f35f86-9389-4506-ad53-d42eec926447\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.857419 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.877347 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.922287 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv"] Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.932074 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wxwq\" (UniqueName: \"kubernetes.io/projected/d81ae352-08d2-433c-b883-deeb78888945-kube-api-access-2wxwq\") pod \"horizon-operator-controller-manager-68c9694994-jp6bp\" (UID: \"d81ae352-08d2-433c-b883-deeb78888945\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.959639 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.966631 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppl4z\" (UniqueName: \"kubernetes.io/projected/64f35f86-9389-4506-ad53-d42eec926447-kube-api-access-ppl4z\") pod \"infra-operator-controller-manager-d5cc86f4b-h78bp\" (UID: \"64f35f86-9389-4506-ad53-d42eec926447\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.980238 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdll\" (UniqueName: \"kubernetes.io/projected/37bdccc0-c16d-4523-94d7-978d8313ca7f-kube-api-access-vcdll\") pod \"keystone-operator-controller-manager-748dc6576f-nkqwb\" (UID: \"37bdccc0-c16d-4523-94d7-978d8313ca7f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.980305 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnqm\" (UniqueName: \"kubernetes.io/projected/15e30337-bd79-4d01-b3ab-2177d3c0609b-kube-api-access-sbnqm\") pod \"ironic-operator-controller-manager-5bfcdc958c-wvfgt\" (UID: \"15e30337-bd79-4d01-b3ab-2177d3c0609b\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" Nov 21 13:51:35 crc kubenswrapper[4904]: I1121 13:51:35.980480 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64f35f86-9389-4506-ad53-d42eec926447-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-h78bp\" (UID: \"64f35f86-9389-4506-ad53-d42eec926447\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.002577 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-qg555" Nov 21 13:51:36 crc kubenswrapper[4904]: E1121 13:51:36.038465 4904 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 21 13:51:36 crc kubenswrapper[4904]: E1121 13:51:36.040777 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64f35f86-9389-4506-ad53-d42eec926447-cert podName:64f35f86-9389-4506-ad53-d42eec926447 nodeName:}" failed. 
No retries permitted until 2025-11-21 13:51:36.540735638 +0000 UTC m=+1170.662268190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64f35f86-9389-4506-ad53-d42eec926447-cert") pod "infra-operator-controller-manager-d5cc86f4b-h78bp" (UID: "64f35f86-9389-4506-ad53-d42eec926447") : secret "infra-operator-webhook-server-cert" not found Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.058581 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppl4z\" (UniqueName: \"kubernetes.io/projected/64f35f86-9389-4506-ad53-d42eec926447-kube-api-access-ppl4z\") pod \"infra-operator-controller-manager-d5cc86f4b-h78bp\" (UID: \"64f35f86-9389-4506-ad53-d42eec926447\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.083393 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.086399 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.115229 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.116794 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.119238 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdll\" (UniqueName: \"kubernetes.io/projected/37bdccc0-c16d-4523-94d7-978d8313ca7f-kube-api-access-vcdll\") pod \"keystone-operator-controller-manager-748dc6576f-nkqwb\" (UID: \"37bdccc0-c16d-4523-94d7-978d8313ca7f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.119291 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbnqm\" (UniqueName: \"kubernetes.io/projected/15e30337-bd79-4d01-b3ab-2177d3c0609b-kube-api-access-sbnqm\") pod \"ironic-operator-controller-manager-5bfcdc958c-wvfgt\" (UID: \"15e30337-bd79-4d01-b3ab-2177d3c0609b\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.119346 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcrxk\" (UniqueName: \"kubernetes.io/projected/98bcfebc-c45f-4a2e-a21f-9c8cf892898c-kube-api-access-pcrxk\") pod \"manila-operator-controller-manager-58bb8d67cc-xr5mv\" (UID: \"98bcfebc-c45f-4a2e-a21f-9c8cf892898c\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.140780 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-stqgm" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.142120 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.155603 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdll\" (UniqueName: \"kubernetes.io/projected/37bdccc0-c16d-4523-94d7-978d8313ca7f-kube-api-access-vcdll\") pod \"keystone-operator-controller-manager-748dc6576f-nkqwb\" (UID: \"37bdccc0-c16d-4523-94d7-978d8313ca7f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.157389 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbnqm\" (UniqueName: \"kubernetes.io/projected/15e30337-bd79-4d01-b3ab-2177d3c0609b-kube-api-access-sbnqm\") pod \"ironic-operator-controller-manager-5bfcdc958c-wvfgt\" (UID: \"15e30337-bd79-4d01-b3ab-2177d3c0609b\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.165834 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.175986 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.177554 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.181617 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vt6wm" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.221360 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcrxk\" (UniqueName: \"kubernetes.io/projected/98bcfebc-c45f-4a2e-a21f-9c8cf892898c-kube-api-access-pcrxk\") pod \"manila-operator-controller-manager-58bb8d67cc-xr5mv\" (UID: \"98bcfebc-c45f-4a2e-a21f-9c8cf892898c\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.221506 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmmrs\" (UniqueName: \"kubernetes.io/projected/72dacec1-d81b-46df-acd2-962095286389-kube-api-access-dmmrs\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-tg85w\" (UID: \"72dacec1-d81b-46df-acd2-962095286389\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.227369 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.232491 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.235068 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-795hv" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.246923 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcrxk\" (UniqueName: \"kubernetes.io/projected/98bcfebc-c45f-4a2e-a21f-9c8cf892898c-kube-api-access-pcrxk\") pod \"manila-operator-controller-manager-58bb8d67cc-xr5mv\" (UID: \"98bcfebc-c45f-4a2e-a21f-9c8cf892898c\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.290520 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.327178 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmmrs\" (UniqueName: \"kubernetes.io/projected/72dacec1-d81b-46df-acd2-962095286389-kube-api-access-dmmrs\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-tg85w\" (UID: \"72dacec1-d81b-46df-acd2-962095286389\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.327254 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95zxg\" (UniqueName: \"kubernetes.io/projected/4dc68816-ed56-4e8b-a41b-b91868bc57d3-kube-api-access-95zxg\") pod \"nova-operator-controller-manager-79556f57fc-n2t2q\" (UID: \"4dc68816-ed56-4e8b-a41b-b91868bc57d3\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.327282 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9hv\" (UniqueName: \"kubernetes.io/projected/301f4657-8519-4071-a82b-b35f80739372-kube-api-access-zd9hv\") pod \"neutron-operator-controller-manager-7c57c8bbc4-b8xrh\" (UID: \"301f4657-8519-4071-a82b-b35f80739372\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.348429 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.349837 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.351362 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-mcxvl" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.357079 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.372715 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmmrs\" (UniqueName: \"kubernetes.io/projected/72dacec1-d81b-46df-acd2-962095286389-kube-api-access-dmmrs\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-tg85w\" (UID: \"72dacec1-d81b-46df-acd2-962095286389\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.390727 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.405352 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.406943 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.409500 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tq42x" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.415212 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.417257 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.420936 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.427189 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.428117 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-smxf6" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.428609 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95zxg\" (UniqueName: \"kubernetes.io/projected/4dc68816-ed56-4e8b-a41b-b91868bc57d3-kube-api-access-95zxg\") pod \"nova-operator-controller-manager-79556f57fc-n2t2q\" (UID: \"4dc68816-ed56-4e8b-a41b-b91868bc57d3\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.428668 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9hv\" (UniqueName: \"kubernetes.io/projected/301f4657-8519-4071-a82b-b35f80739372-kube-api-access-zd9hv\") pod \"neutron-operator-controller-manager-7c57c8bbc4-b8xrh\" (UID: \"301f4657-8519-4071-a82b-b35f80739372\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.428729 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8fqd\" (UniqueName: \"kubernetes.io/projected/c8300ff9-666c-4d85-bd43-120d41529215-kube-api-access-b8fqd\") pod \"octavia-operator-controller-manager-fd75fd47d-6bxqf\" (UID: \"c8300ff9-666c-4d85-bd43-120d41529215\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.439837 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.446345 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.447228 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.447435 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.452124 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.453163 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.461501 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nlk6b" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.465795 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd9hv\" (UniqueName: \"kubernetes.io/projected/301f4657-8519-4071-a82b-b35f80739372-kube-api-access-zd9hv\") pod \"neutron-operator-controller-manager-7c57c8bbc4-b8xrh\" (UID: \"301f4657-8519-4071-a82b-b35f80739372\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.471269 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95zxg\" (UniqueName: \"kubernetes.io/projected/4dc68816-ed56-4e8b-a41b-b91868bc57d3-kube-api-access-95zxg\") pod \"nova-operator-controller-manager-79556f57fc-n2t2q\" (UID: \"4dc68816-ed56-4e8b-a41b-b91868bc57d3\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.472117 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.473693 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.480039 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9t5dq" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.495825 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.505595 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.507452 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.510498 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-qk7tr" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.523793 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.530373 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtlk4\" (UniqueName: \"kubernetes.io/projected/887b5387-ce64-43b1-8755-2c401719a2d6-kube-api-access-gtlk4\") pod \"ovn-operator-controller-manager-66cf5c67ff-qmmmc\" (UID: \"887b5387-ce64-43b1-8755-2c401719a2d6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.530579 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.530576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8fqd\" (UniqueName: \"kubernetes.io/projected/c8300ff9-666c-4d85-bd43-120d41529215-kube-api-access-b8fqd\") pod \"octavia-operator-controller-manager-fd75fd47d-6bxqf\" (UID: \"c8300ff9-666c-4d85-bd43-120d41529215\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.530826 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpswv\" (UniqueName: \"kubernetes.io/projected/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-kube-api-access-qpswv\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.531116 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.531204 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n2ft\" (UniqueName: \"kubernetes.io/projected/27625471-8f27-449b-a245-558e079a38ab-kube-api-access-7n2ft\") pod \"placement-operator-controller-manager-5db546f9d9-lrfmt\" (UID: \"27625471-8f27-449b-a245-558e079a38ab\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.543251 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dkvkm" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.544334 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 
13:51:36.561059 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8fqd\" (UniqueName: \"kubernetes.io/projected/c8300ff9-666c-4d85-bd43-120d41529215-kube-api-access-b8fqd\") pod \"octavia-operator-controller-manager-fd75fd47d-6bxqf\" (UID: \"c8300ff9-666c-4d85-bd43-120d41529215\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.566207 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.569365 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.576675 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.586803 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.600342 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-57jrf"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.602246 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.605041 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-57jrf"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.630476 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jl6kv" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.654105 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64f35f86-9389-4506-ad53-d42eec926447-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-h78bp\" (UID: \"64f35f86-9389-4506-ad53-d42eec926447\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.654509 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpswv\" (UniqueName: \"kubernetes.io/projected/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-kube-api-access-qpswv\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.654674 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.654744 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wcnnj\" (UniqueName: \"kubernetes.io/projected/90ef3fb1-0f63-4fd0-94ef-2fefce011d23-kube-api-access-wcnnj\") pod \"telemetry-operator-controller-manager-7fc59d4bfd-vzk7g\" (UID: \"90ef3fb1-0f63-4fd0-94ef-2fefce011d23\") " pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.654775 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n2ft\" (UniqueName: \"kubernetes.io/projected/27625471-8f27-449b-a245-558e079a38ab-kube-api-access-7n2ft\") pod \"placement-operator-controller-manager-5db546f9d9-lrfmt\" (UID: \"27625471-8f27-449b-a245-558e079a38ab\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.654881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fmdc\" (UniqueName: \"kubernetes.io/projected/756ba318-ec48-4012-9d9d-108c3f1fad3c-kube-api-access-7fmdc\") pod \"swift-operator-controller-manager-6fdc4fcf86-kpwdb\" (UID: \"756ba318-ec48-4012-9d9d-108c3f1fad3c\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.655013 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtlk4\" (UniqueName: \"kubernetes.io/projected/887b5387-ce64-43b1-8755-2c401719a2d6-kube-api-access-gtlk4\") pod \"ovn-operator-controller-manager-66cf5c67ff-qmmmc\" (UID: \"887b5387-ce64-43b1-8755-2c401719a2d6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.655038 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glq6n\" (UniqueName: \"kubernetes.io/projected/a8dc5688-31cd-412c-91e9-3ae137d2a20a-kube-api-access-glq6n\") pod \"test-operator-controller-manager-5cb74df96-vtkhq\" (UID: \"a8dc5688-31cd-412c-91e9-3ae137d2a20a\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" Nov 21 13:51:36 crc kubenswrapper[4904]: E1121 13:51:36.656774 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:51:36 crc kubenswrapper[4904]: E1121 13:51:36.656947 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert podName:ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd nodeName:}" failed. No retries permitted until 2025-11-21 13:51:37.156867814 +0000 UTC m=+1171.278400366 (durationBeforeRetry 500ms). 
Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.685988 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtlk4\" (UniqueName: \"kubernetes.io/projected/887b5387-ce64-43b1-8755-2c401719a2d6-kube-api-access-gtlk4\") pod \"ovn-operator-controller-manager-66cf5c67ff-qmmmc\" (UID: \"887b5387-ce64-43b1-8755-2c401719a2d6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.686109 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpswv\" (UniqueName: \"kubernetes.io/projected/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-kube-api-access-qpswv\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.712815 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n2ft\" (UniqueName: \"kubernetes.io/projected/27625471-8f27-449b-a245-558e079a38ab-kube-api-access-7n2ft\") pod \"placement-operator-controller-manager-5db546f9d9-lrfmt\" (UID: \"27625471-8f27-449b-a245-558e079a38ab\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.716734 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.718504 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.724328 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.724811 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.724627 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64f35f86-9389-4506-ad53-d42eec926447-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-h78bp\" (UID: \"64f35f86-9389-4506-ad53-d42eec926447\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.725868 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jq59d" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.738489 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.741166 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.762421 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.763736 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.766352 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zhvn6" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.766693 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" event={"ID":"50f3313d-ff99-4b0e-931a-c2a774375ae3","Type":"ContainerStarted","Data":"d63859d3953261a6e366c4675e33b72265e3f516492bd6bca740029b8f3ed8f0"} Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.768179 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fmdc\" (UniqueName: \"kubernetes.io/projected/756ba318-ec48-4012-9d9d-108c3f1fad3c-kube-api-access-7fmdc\") pod \"swift-operator-controller-manager-6fdc4fcf86-kpwdb\" (UID: \"756ba318-ec48-4012-9d9d-108c3f1fad3c\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.768331 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glq6n\" (UniqueName: \"kubernetes.io/projected/a8dc5688-31cd-412c-91e9-3ae137d2a20a-kube-api-access-glq6n\") pod \"test-operator-controller-manager-5cb74df96-vtkhq\" (UID: \"a8dc5688-31cd-412c-91e9-3ae137d2a20a\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.768431 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstz4\" (UniqueName: \"kubernetes.io/projected/fb4141a1-4768-45ae-a8e8-ec1d1c01db4e-kube-api-access-lstz4\") pod \"watcher-operator-controller-manager-864885998-57jrf\" (UID: \"fb4141a1-4768-45ae-a8e8-ec1d1c01db4e\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.768575 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcnnj\" (UniqueName: \"kubernetes.io/projected/90ef3fb1-0f63-4fd0-94ef-2fefce011d23-kube-api-access-wcnnj\") pod \"telemetry-operator-controller-manager-7fc59d4bfd-vzk7g\" (UID: \"90ef3fb1-0f63-4fd0-94ef-2fefce011d23\") " pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.769700 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.772176 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.812512 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.818948 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcnnj\" (UniqueName: \"kubernetes.io/projected/90ef3fb1-0f63-4fd0-94ef-2fefce011d23-kube-api-access-wcnnj\") pod \"telemetry-operator-controller-manager-7fc59d4bfd-vzk7g\" (UID: \"90ef3fb1-0f63-4fd0-94ef-2fefce011d23\") " pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.819348 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fmdc\" (UniqueName: \"kubernetes.io/projected/756ba318-ec48-4012-9d9d-108c3f1fad3c-kube-api-access-7fmdc\") pod \"swift-operator-controller-manager-6fdc4fcf86-kpwdb\" (UID: \"756ba318-ec48-4012-9d9d-108c3f1fad3c\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.826980 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glq6n\" (UniqueName: \"kubernetes.io/projected/a8dc5688-31cd-412c-91e9-3ae137d2a20a-kube-api-access-glq6n\") pod \"test-operator-controller-manager-5cb74df96-vtkhq\" (UID: \"a8dc5688-31cd-412c-91e9-3ae137d2a20a\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.828998 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.856029 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.865599 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.870931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj9xq\" (UniqueName: \"kubernetes.io/projected/2e76f101-21a3-4b78-970e-55a016ee2a40-kube-api-access-lj9xq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5lzvr\" (UID: \"2e76f101-21a3-4b78-970e-55a016ee2a40\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.870985 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjk2z\" (UniqueName: \"kubernetes.io/projected/80a18488-07da-4b66-b164-40f7d7027b5b-kube-api-access-qjk2z\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.871029 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lstz4\" (UniqueName: \"kubernetes.io/projected/fb4141a1-4768-45ae-a8e8-ec1d1c01db4e-kube-api-access-lstz4\") pod \"watcher-operator-controller-manager-864885998-57jrf\" (UID: \"fb4141a1-4768-45ae-a8e8-ec1d1c01db4e\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.871138 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.871165 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.889592 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lstz4\" (UniqueName: \"kubernetes.io/projected/fb4141a1-4768-45ae-a8e8-ec1d1c01db4e-kube-api-access-lstz4\") pod \"watcher-operator-controller-manager-864885998-57jrf\" (UID: \"fb4141a1-4768-45ae-a8e8-ec1d1c01db4e\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" Nov 21 13:51:36 crc kubenswrapper[4904]: W1121 13:51:36.931507 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b290147_91ef_4734_961b_b61487960c33.slice/crio-665951e3e60c4e476a3b44de0e629e3f7721742f7f41a22c82f44d6de1828436 WatchSource:0}: Error finding container 665951e3e60c4e476a3b44de0e629e3f7721742f7f41a22c82f44d6de1828436: Status 404 returned error can't 
find the container with id 665951e3e60c4e476a3b44de0e629e3f7721742f7f41a22c82f44d6de1828436 Nov 21 13:51:36 crc kubenswrapper[4904]: W1121 13:51:36.933015 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55b86375_94e3_4c12_96b9_c5f581b3d8f3.slice/crio-269c460ea57809bdece2271649ce99012268b67ad4cbe86a321035516e714188 WatchSource:0}: Error finding container 269c460ea57809bdece2271649ce99012268b67ad4cbe86a321035516e714188: Status 404 returned error can't find the container with id 269c460ea57809bdece2271649ce99012268b67ad4cbe86a321035516e714188 Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.938590 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.941235 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.946024 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc"] Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.973096 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.973145 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.973183 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj9xq\" (UniqueName: \"kubernetes.io/projected/2e76f101-21a3-4b78-970e-55a016ee2a40-kube-api-access-lj9xq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5lzvr\" (UID: \"2e76f101-21a3-4b78-970e-55a016ee2a40\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.973215 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjk2z\" (UniqueName: \"kubernetes.io/projected/80a18488-07da-4b66-b164-40f7d7027b5b-kube-api-access-qjk2z\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:36 crc kubenswrapper[4904]: E1121 13:51:36.974006 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 21 13:51:36 crc kubenswrapper[4904]: E1121 13:51:36.974066 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs podName:80a18488-07da-4b66-b164-40f7d7027b5b nodeName:}" failed. 
No retries permitted until 2025-11-21 13:51:37.474049013 +0000 UTC m=+1171.595581565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs") pod "openstack-operator-controller-manager-79fb5496bb-zp56v" (UID: "80a18488-07da-4b66-b164-40f7d7027b5b") : secret "metrics-server-cert" not found Nov 21 13:51:36 crc kubenswrapper[4904]: E1121 13:51:36.974303 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 21 13:51:36 crc kubenswrapper[4904]: E1121 13:51:36.974405 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs podName:80a18488-07da-4b66-b164-40f7d7027b5b nodeName:}" failed. No retries permitted until 2025-11-21 13:51:37.474373751 +0000 UTC m=+1171.595906483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs") pod "openstack-operator-controller-manager-79fb5496bb-zp56v" (UID: "80a18488-07da-4b66-b164-40f7d7027b5b") : secret "webhook-server-cert" not found Nov 21 13:51:36 crc kubenswrapper[4904]: I1121 13:51:36.994436 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj9xq\" (UniqueName: \"kubernetes.io/projected/2e76f101-21a3-4b78-970e-55a016ee2a40-kube-api-access-lj9xq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5lzvr\" (UID: \"2e76f101-21a3-4b78-970e-55a016ee2a40\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.002863 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjk2z\" (UniqueName: \"kubernetes.io/projected/80a18488-07da-4b66-b164-40f7d7027b5b-kube-api-access-qjk2z\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.003419 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.062531 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.177835 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:37 crc kubenswrapper[4904]: E1121 13:51:37.178042 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:51:37 crc kubenswrapper[4904]: E1121 13:51:37.178102 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert podName:ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd nodeName:}" failed. 
No retries permitted until 2025-11-21 13:51:38.178081447 +0000 UTC m=+1172.299613999 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" (UID: "ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.269433 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.316937 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.336258 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.346431 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.360238 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.424898 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.431142 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt"] Nov 21 13:51:37 crc kubenswrapper[4904]: W1121 13:51:37.441998 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15e30337_bd79_4d01_b3ab_2177d3c0609b.slice/crio-e9f73b91ced234d5ef76c04f1fdaccd1702f900b884e93340a999718d9271c64 WatchSource:0}: Error finding container e9f73b91ced234d5ef76c04f1fdaccd1702f900b884e93340a999718d9271c64: Status 404 returned error can't find the container with id e9f73b91ced234d5ef76c04f1fdaccd1702f900b884e93340a999718d9271c64 Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.483134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.483287 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:37 crc kubenswrapper[4904]: E1121 13:51:37.483342 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 21 13:51:37 crc kubenswrapper[4904]: E1121 13:51:37.483467 4904 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs podName:80a18488-07da-4b66-b164-40f7d7027b5b nodeName:}" failed. No retries permitted until 2025-11-21 13:51:38.483410352 +0000 UTC m=+1172.604942904 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs") pod "openstack-operator-controller-manager-79fb5496bb-zp56v" (UID: "80a18488-07da-4b66-b164-40f7d7027b5b") : secret "webhook-server-cert" not found Nov 21 13:51:37 crc kubenswrapper[4904]: E1121 13:51:37.483865 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 21 13:51:37 crc kubenswrapper[4904]: E1121 13:51:37.483960 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs podName:80a18488-07da-4b66-b164-40f7d7027b5b nodeName:}" failed. No retries permitted until 2025-11-21 13:51:38.483949675 +0000 UTC m=+1172.605482227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs") pod "openstack-operator-controller-manager-79fb5496bb-zp56v" (UID: "80a18488-07da-4b66-b164-40f7d7027b5b") : secret "metrics-server-cert" not found Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.797101 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" event={"ID":"98bcfebc-c45f-4a2e-a21f-9c8cf892898c","Type":"ContainerStarted","Data":"6fab038de635135652cc6f17259e985a065a996fed0a5bf297c15b77a2d2af94"} Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.801437 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" event={"ID":"d81ae352-08d2-433c-b883-deeb78888945","Type":"ContainerStarted","Data":"848502048f6979e76127ced76307b80343046dbeff760dc91191f52c2f136d42"} Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.813127 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" event={"ID":"55b86375-94e3-4c12-96b9-c5f581b3d8f3","Type":"ContainerStarted","Data":"269c460ea57809bdece2271649ce99012268b67ad4cbe86a321035516e714188"} Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.821840 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" event={"ID":"b91d5a1c-a2d5-4875-a23a-e43ae7f18937","Type":"ContainerStarted","Data":"3266e11eb79a8f86bd481a6bdb253cfcee1fb15e6b4fc5098449a9f033c6652a"} Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.824829 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.832468 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.837675 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.843895 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.861853 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" event={"ID":"37bdccc0-c16d-4523-94d7-978d8313ca7f","Type":"ContainerStarted","Data":"ccfe4e18d36027f77429766bde3b4dda6a52646ac00f1c64b9d6b1a6b6064d53"} Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.863399 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" event={"ID":"15e30337-bd79-4d01-b3ab-2177d3c0609b","Type":"ContainerStarted","Data":"e9f73b91ced234d5ef76c04f1fdaccd1702f900b884e93340a999718d9271c64"} Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.864026 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf"] Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.868303 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" event={"ID":"14e3fbea-6dc2-44a4-81be-dfda27a6cdd8","Type":"ContainerStarted","Data":"6bb99558a1f74ec2517c7dfef5f2144f87d118357deb07636b446d5f3bcbe01a"} Nov 21 13:51:37 crc kubenswrapper[4904]: I1121 13:51:37.888376 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" event={"ID":"4b290147-91ef-4734-961b-b61487960c33","Type":"ContainerStarted","Data":"665951e3e60c4e476a3b44de0e629e3f7721742f7f41a22c82f44d6de1828436"} Nov 21 13:51:37 crc kubenswrapper[4904]: W1121 13:51:37.917010 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod887b5387_ce64_43b1_8755_2c401719a2d6.slice/crio-be923d111039053c25d5713ba963e154612382486a8b4186ff6bc0a04d6eb4f7 WatchSource:0}: Error finding container be923d111039053c25d5713ba963e154612382486a8b4186ff6bc0a04d6eb4f7: Status 404 returned error can't find the container with id be923d111039053c25d5713ba963e154612382486a8b4186ff6bc0a04d6eb4f7 Nov 21 13:51:37 crc kubenswrapper[4904]: W1121 13:51:37.921118 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72dacec1_d81b_46df_acd2_962095286389.slice/crio-a62541226c64ad1d08dc30e87dcaae876fd196b97f9f7ba62d9f45eaf9c1f4b6 WatchSource:0}: Error finding container a62541226c64ad1d08dc30e87dcaae876fd196b97f9f7ba62d9f45eaf9c1f4b6: Status 404 returned error can't find the container with id a62541226c64ad1d08dc30e87dcaae876fd196b97f9f7ba62d9f45eaf9c1f4b6 Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.065583 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt"] Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.227791 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-57jrf"] Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.233294 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.233527 4904 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.233589 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert podName:ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd nodeName:}" failed. No retries permitted until 2025-11-21 13:51:40.233567645 +0000 UTC m=+1174.355100197 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" (UID: "ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.323223 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lstz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-57jrf_openstack-operators(fb4141a1-4768-45ae-a8e8-ec1d1c01db4e): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.323466 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcnnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7fc59d4bfd-vzk7g_openstack-operators(90ef3fb1-0f63-4fd0-94ef-2fefce011d23): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.342899 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb"] Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.348068 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcnnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7fc59d4bfd-vzk7g_openstack-operators(90ef3fb1-0f63-4fd0-94ef-2fefce011d23): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.348106 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppl4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-h78bp_openstack-operators(64f35f86-9389-4506-ad53-d42eec926447): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.348074 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lstz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-57jrf_openstack-operators(fb4141a1-4768-45ae-a8e8-ec1d1c01db4e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.348260 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lj9xq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5lzvr_openstack-operators(2e76f101-21a3-4b78-970e-55a016ee2a40): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.351962 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" podUID="2e76f101-21a3-4b78-970e-55a016ee2a40" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.352219 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" podUID="90ef3fb1-0f63-4fd0-94ef-2fefce011d23" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.352457 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" podUID="fb4141a1-4768-45ae-a8e8-ec1d1c01db4e" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.353703 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 
--upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppl4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-h78bp_openstack-operators(64f35f86-9389-4506-ad53-d42eec926447): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.355231 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" podUID="64f35f86-9389-4506-ad53-d42eec926447" Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.360946 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g"] Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.368980 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq"] Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.372706 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr"] Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.382587 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp"] Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.551479 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.551618 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 
13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.551673 4904 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.551744 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs podName:80a18488-07da-4b66-b164-40f7d7027b5b nodeName:}" failed. No retries permitted until 2025-11-21 13:51:40.551722126 +0000 UTC m=+1174.673254678 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs") pod "openstack-operator-controller-manager-79fb5496bb-zp56v" (UID: "80a18488-07da-4b66-b164-40f7d7027b5b") : secret "metrics-server-cert" not found
Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.551804 4904 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.551841 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs podName:80a18488-07da-4b66-b164-40f7d7027b5b nodeName:}" failed. No retries permitted until 2025-11-21 13:51:40.551830358 +0000 UTC m=+1174.673362910 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs") pod "openstack-operator-controller-manager-79fb5496bb-zp56v" (UID: "80a18488-07da-4b66-b164-40f7d7027b5b") : secret "webhook-server-cert" not found
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.901235 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" event={"ID":"90ef3fb1-0f63-4fd0-94ef-2fefce011d23","Type":"ContainerStarted","Data":"53503fb2e697001596ac86cb666881672c581d468363f5636973738d1ae5a872"}
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.904254 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" event={"ID":"2e76f101-21a3-4b78-970e-55a016ee2a40","Type":"ContainerStarted","Data":"fe5c632b784901cfa0d34396453f8f6fc5ebe4acb52a59994f18432a4d4599e2"}
Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.907440 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" podUID="90ef3fb1-0f63-4fd0-94ef-2fefce011d23"
Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.908050 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" podUID="2e76f101-21a3-4b78-970e-55a016ee2a40"
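
The two MountVolume.SetUp failures above are not fatal. The volume manager parks each failed operation behind an exponential backoff (the "durationBeforeRetry 2s" shown above) and retries once the deadline passes, which is why the same mounts succeed at 13:51:40 further down. A minimal Go sketch of that retry bookkeeping, assuming the kubelet's usual 500ms initial delay and 2m2s cap (those constants are not visible in this log):

package main

import (
	"fmt"
	"time"
)

// expBackoff mirrors the per-operation state the kubelet's
// nestedpendingoperations keeps: when the last failure happened and how
// long to wait before the next attempt is permitted.
type expBackoff struct {
	lastError time.Time
	duration  time.Duration
}

const (
	initialDelay = 500 * time.Millisecond          // assumed initial value
	maxDelay     = 2*time.Minute + 2*time.Second   // assumed cap
)

// fail doubles the wait after every error, up to the cap.
func (b *expBackoff) fail(now time.Time) {
	switch {
	case b.duration == 0:
		b.duration = initialDelay
	case 2*b.duration < maxDelay:
		b.duration *= 2
	default:
		b.duration = maxDelay
	}
	b.lastError = now
}

// check reproduces the "No retries permitted until ..." refusal.
func (b *expBackoff) check(now time.Time) error {
	if deadline := b.lastError.Add(b.duration); now.Before(deadline) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			deadline.Format(time.RFC3339Nano), b.duration)
	}
	return nil
}

func main() {
	var b expBackoff
	now := time.Now()
	for i := 0; i < 3; i++ {
		b.fail(now)               // MountVolume.SetUp failed: secret not found
		fmt.Println(b.check(now)) // refused until lastError + duration
		now = now.Add(b.duration) // simulate waiting out the backoff
	}
}

Once the missing metrics-server-cert and webhook-server-cert secrets show up, the first permitted retry succeeds, matching the "MountVolume.SetUp succeeded" records at 13:51:40 below.
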
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.913231 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" event={"ID":"a8dc5688-31cd-412c-91e9-3ae137d2a20a","Type":"ContainerStarted","Data":"c852328ca8e624ffd41a8f5e2ca8317495a84d2d2f8232a764073c9d77af813b"}
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.917282 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" event={"ID":"72dacec1-d81b-46df-acd2-962095286389","Type":"ContainerStarted","Data":"a62541226c64ad1d08dc30e87dcaae876fd196b97f9f7ba62d9f45eaf9c1f4b6"}
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.920487 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" event={"ID":"c8300ff9-666c-4d85-bd43-120d41529215","Type":"ContainerStarted","Data":"15e5b4d08a3da6699b58563cee4f95307b763036463299fb53859cefa10ed8bb"}
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.933071 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" event={"ID":"fb4141a1-4768-45ae-a8e8-ec1d1c01db4e","Type":"ContainerStarted","Data":"041d4f596601467efc034652209a16d104248d7c9e5e74f3e4248a9c2c19b8ba"}
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.937818 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" event={"ID":"4dc68816-ed56-4e8b-a41b-b91868bc57d3","Type":"ContainerStarted","Data":"95e9f9ec997414d64d55dab28f9a1e5d50b2136a423839102a99466229838996"}
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.941557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" event={"ID":"887b5387-ce64-43b1-8755-2c401719a2d6","Type":"ContainerStarted","Data":"be923d111039053c25d5713ba963e154612382486a8b4186ff6bc0a04d6eb4f7"}
Nov 21 13:51:38 crc kubenswrapper[4904]: E1121 13:51:38.943401 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" podUID="fb4141a1-4768-45ae-a8e8-ec1d1c01db4e"
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.950222 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" event={"ID":"301f4657-8519-4071-a82b-b35f80739372","Type":"ContainerStarted","Data":"e5cf55f45eddd68cad55ba6610c46ee6af9522944627bfcd63e7f48663e5c551"}
Nov 21 13:51:38 crc kubenswrapper[4904]: I1121 13:51:38.965680 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" event={"ID":"27625471-8f27-449b-a245-558e079a38ab","Type":"ContainerStarted","Data":"71a1c119ba80a517383026a5419f7a98d5d5408a73231924813a8d40c0c94364"}
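
The burst of "SyncLoop (PLEG): event for pod" records above is the kubelet's pod lifecycle event generator relisting container state and feeding ContainerStarted transitions into the sync loop. The same transitions are visible from outside the node through the API; a small client-go sketch (the kubeconfig path and the hard-coded namespace are illustrative, not taken from this log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	w, err := client.CoreV1().Pods("openstack-operators").
		Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each watch event roughly mirrors a SyncLoop UPDATE / PLEG event:
	// container state transitions show up in ContainerStatuses.
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				fmt.Printf("%s/%s: %s\n", pod.Name, cs.Name, cs.State.Waiting.Reason)
			}
		}
	}
}

Waiting reasons such as ErrImagePull and ImagePullBackOff surface in ContainerStatuses and line up one-to-one with the pod_workers errors in this journal.
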
pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" event={"ID":"64f35f86-9389-4506-ad53-d42eec926447","Type":"ContainerStarted","Data":"f01953abd5986a6eff54d93b4b5d36d1ebe45e81def2f4214c54500b58430f18"} Nov 21 13:51:39 crc kubenswrapper[4904]: I1121 13:51:39.013265 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" event={"ID":"756ba318-ec48-4012-9d9d-108c3f1fad3c","Type":"ContainerStarted","Data":"e11b2277cd48572dcc6991e311b5ac019369de5378bf99d5509a659b23c46de6"} Nov 21 13:51:39 crc kubenswrapper[4904]: E1121 13:51:39.013330 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" podUID="64f35f86-9389-4506-ad53-d42eec926447" Nov 21 13:51:40 crc kubenswrapper[4904]: E1121 13:51:40.057498 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" podUID="2e76f101-21a3-4b78-970e-55a016ee2a40" Nov 21 13:51:40 crc kubenswrapper[4904]: E1121 13:51:40.058248 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" podUID="64f35f86-9389-4506-ad53-d42eec926447" Nov 21 13:51:40 crc kubenswrapper[4904]: E1121 13:51:40.058838 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" podUID="90ef3fb1-0f63-4fd0-94ef-2fefce011d23" Nov 21 13:51:40 crc kubenswrapper[4904]: E1121 13:51:40.118684 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" podUID="fb4141a1-4768-45ae-a8e8-ec1d1c01db4e" Nov 21 13:51:40 crc kubenswrapper[4904]: I1121 13:51:40.312559 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:40 crc kubenswrapper[4904]: I1121 13:51:40.321205 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t\" (UID: \"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:40 crc kubenswrapper[4904]: I1121 13:51:40.391721 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:51:40 crc kubenswrapper[4904]: I1121 13:51:40.620888 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:40 crc kubenswrapper[4904]: I1121 13:51:40.628457 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:40 crc kubenswrapper[4904]: I1121 13:51:40.632627 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-metrics-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:40 crc kubenswrapper[4904]: I1121 13:51:40.633315 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80a18488-07da-4b66-b164-40f7d7027b5b-webhook-certs\") pod \"openstack-operator-controller-manager-79fb5496bb-zp56v\" (UID: \"80a18488-07da-4b66-b164-40f7d7027b5b\") " pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:40 crc kubenswrapper[4904]: I1121 13:51:40.858575 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:51:50 crc kubenswrapper[4904]: E1121 13:51:50.118575 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9" Nov 21 13:51:50 crc kubenswrapper[4904]: E1121 13:51:50.119620 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:553b1288b330ad05771d59c6b73c1681c95f457e8475682f9ad0d2e6b85f37e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kspnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-79856dc55c-hm8cc_openstack-operators(4b290147-91ef-4734-961b-b61487960c33): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:51:51 crc kubenswrapper[4904]: E1121 13:51:51.456728 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 21 13:51:51 crc kubenswrapper[4904]: E1121 13:51:51.457489 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-glq6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-vtkhq_openstack-operators(a8dc5688-31cd-412c-91e9-3ae137d2a20a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:51:51 crc kubenswrapper[4904]: E1121 13:51:51.740618 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 21 13:51:51 crc kubenswrapper[4904]: E1121 13:51:51.740955 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gtlk4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-qmmmc_openstack-operators(887b5387-ce64-43b1-8755-2c401719a2d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:51:52 crc kubenswrapper[4904]: E1121 13:51:52.322888 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991" Nov 21 13:51:52 crc kubenswrapper[4904]: E1121 13:51:52.323238 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5fvhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-68b95954c9-bnt22_openstack-operators(b91d5a1c-a2d5-4875-a23a-e43ae7f18937): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:51:53 crc kubenswrapper[4904]: E1121 13:51:53.138766 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 21 13:51:53 crc kubenswrapper[4904]: E1121 13:51:53.139088 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2wxwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c9694994-jp6bp_openstack-operators(d81ae352-08d2-433c-b883-deeb78888945): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:51:53 crc kubenswrapper[4904]: E1121 13:51:53.643762 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 21 13:51:53 crc kubenswrapper[4904]: E1121 13:51:53.644027 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8fqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-6bxqf_openstack-operators(c8300ff9-666c-4d85-bd43-120d41529215): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:51:54 crc kubenswrapper[4904]: E1121 13:51:54.099798 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 21 13:51:54 crc kubenswrapper[4904]: E1121 13:51:54.100200 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n2ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-lrfmt_openstack-operators(27625471-8f27-449b-a245-558e079a38ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:51:54 crc kubenswrapper[4904]: E1121 13:51:54.636533 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 21 13:51:54 crc kubenswrapper[4904]: E1121 13:51:54.636870 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pcrxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-xr5mv_openstack-operators(98bcfebc-c45f-4a2e-a21f-9c8cf892898c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:51:55 crc kubenswrapper[4904]: E1121 13:51:55.061000 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 21 13:51:55 crc kubenswrapper[4904]: E1121 13:51:55.061243 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vcdll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-nkqwb_openstack-operators(37bdccc0-c16d-4523-94d7-978d8313ca7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:52:03 crc kubenswrapper[4904]: I1121 13:52:03.469464 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v"] Nov 21 13:52:03 crc kubenswrapper[4904]: I1121 13:52:03.527313 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t"] Nov 21 13:52:05 crc kubenswrapper[4904]: I1121 13:52:05.261068 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" event={"ID":"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd","Type":"ContainerStarted","Data":"57f5f146ca746c51e5073795d99340f481f78a5d0e40e90b10022a4b34a65f4f"} Nov 21 13:52:05 crc kubenswrapper[4904]: I1121 13:52:05.262282 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" event={"ID":"80a18488-07da-4b66-b164-40f7d7027b5b","Type":"ContainerStarted","Data":"894dd1c4833bfe9dd9a62aec4e0df02f1d3f8fda03761e90d7735cc2bb6e195e"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.305085 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" event={"ID":"4dc68816-ed56-4e8b-a41b-b91868bc57d3","Type":"ContainerStarted","Data":"f167dd1da7596e8705a0c0e02e1693b6a4710e172d85f1ed7c109a1e08b4c5fb"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.325891 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" event={"ID":"55b86375-94e3-4c12-96b9-c5f581b3d8f3","Type":"ContainerStarted","Data":"fe49482d9faf1131e6e90bb35a24d274c8ca5e125914c893e82a40b94678213e"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.345946 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" event={"ID":"301f4657-8519-4071-a82b-b35f80739372","Type":"ContainerStarted","Data":"0d8f06d16c7fc52d37c6bc261569c39a3f9ed98555878acff815bda7064eebdf"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.361399 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" event={"ID":"50f3313d-ff99-4b0e-931a-c2a774375ae3","Type":"ContainerStarted","Data":"3b1c48b22ac9799f94981a0f0a70309513029b4e44cdeb31f7ee8e1453bf1e78"} Nov 21 13:52:07 crc 
kubenswrapper[4904]: I1121 13:52:07.370956 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" event={"ID":"80a18488-07da-4b66-b164-40f7d7027b5b","Type":"ContainerStarted","Data":"12e7ace6fc9ed5ba2f008a3d6cf9f4777772a49eab1f0270ec65a45fd473fd9d"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.372373 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.396934 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" event={"ID":"756ba318-ec48-4012-9d9d-108c3f1fad3c","Type":"ContainerStarted","Data":"c64324f34fc177a06ffb0da65cf2d27d85e69a1300a3b6a03b0a0f20d5c7b2ff"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.407354 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" event={"ID":"72dacec1-d81b-46df-acd2-962095286389","Type":"ContainerStarted","Data":"d8b53372ddfa94427c3da0708f50ba9b40926c62b79d97ea5c62e07e569c9752"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.410472 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" event={"ID":"64f35f86-9389-4506-ad53-d42eec926447","Type":"ContainerStarted","Data":"7f9426c31654be298847d93e88e2757286868144f5400027b1b149383b581b5c"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.434034 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" event={"ID":"15e30337-bd79-4d01-b3ab-2177d3c0609b","Type":"ContainerStarted","Data":"e9930cd640088526587df543546b901fa713f32f6401986e2a52f74075cb9d44"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.455023 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" event={"ID":"2e76f101-21a3-4b78-970e-55a016ee2a40","Type":"ContainerStarted","Data":"cc32c6d931db458d36910ce23eaf733698a90296f0e7e6f776e52970e5d70b8f"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.483456 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" event={"ID":"14e3fbea-6dc2-44a4-81be-dfda27a6cdd8","Type":"ContainerStarted","Data":"e1bb9a2f1c2f223cca65a8b5475b9f66e5e52829f0d5597970784ce8cec248a4"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.485670 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" event={"ID":"fb4141a1-4768-45ae-a8e8-ec1d1c01db4e","Type":"ContainerStarted","Data":"ceb7ee00e3d1855b5ad6c17ae5a80d034db85fc6b1ba0c6544abd7d50c009e81"} Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.520412 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" podStartSLOduration=31.520391573 podStartE2EDuration="31.520391573s" podCreationTimestamp="2025-11-21 13:51:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:52:07.489898289 +0000 UTC m=+1201.611430831" watchObservedRunningTime="2025-11-21 
13:52:07.520391573 +0000 UTC m=+1201.641924125" Nov 21 13:52:07 crc kubenswrapper[4904]: I1121 13:52:07.527446 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5lzvr" podStartSLOduration=4.143541295 podStartE2EDuration="31.527426906s" podCreationTimestamp="2025-11-21 13:51:36 +0000 UTC" firstStartedPulling="2025-11-21 13:51:38.347999319 +0000 UTC m=+1172.469531861" lastFinishedPulling="2025-11-21 13:52:05.73188492 +0000 UTC m=+1199.853417472" observedRunningTime="2025-11-21 13:52:07.522994404 +0000 UTC m=+1201.644526956" watchObservedRunningTime="2025-11-21 13:52:07.527426906 +0000 UTC m=+1201.648959448" Nov 21 13:52:08 crc kubenswrapper[4904]: E1121 13:52:08.086254 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c" Nov 21 13:52:08 crc kubenswrapper[4904]: E1121 13:52:08.086328 4904 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c" Nov 21 13:52:08 crc kubenswrapper[4904]: E1121 13:52:08.086516 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wcnnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7fc59d4bfd-vzk7g_openstack-operators(90ef3fb1-0f63-4fd0-94ef-2fefce011d23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:52:10 crc kubenswrapper[4904]: I1121 13:52:10.530764 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" event={"ID":"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd","Type":"ContainerStarted","Data":"7294ab7144c55c76541ce3b1825b738a93515c7d8cfd689bc39a114c19b87c6e"} Nov 21 13:52:10 crc kubenswrapper[4904]: E1121 13:52:10.898824 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" podUID="4b290147-91ef-4734-961b-b61487960c33" Nov 21 13:52:10 crc kubenswrapper[4904]: E1121 13:52:10.914715 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" podUID="c8300ff9-666c-4d85-bd43-120d41529215" Nov 21 13:52:10 crc kubenswrapper[4904]: E1121 13:52:10.928010 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" podUID="27625471-8f27-449b-a245-558e079a38ab" Nov 21 13:52:10 crc kubenswrapper[4904]: E1121 13:52:10.977900 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" podUID="d81ae352-08d2-433c-b883-deeb78888945" Nov 21 13:52:11 crc kubenswrapper[4904]: E1121 13:52:11.013178 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" podUID="90ef3fb1-0f63-4fd0-94ef-2fefce011d23" Nov 21 13:52:11 crc kubenswrapper[4904]: E1121 13:52:11.230930 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" podUID="98bcfebc-c45f-4a2e-a21f-9c8cf892898c" Nov 21 13:52:11 crc kubenswrapper[4904]: E1121 13:52:11.234779 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" podUID="887b5387-ce64-43b1-8755-2c401719a2d6" Nov 21 13:52:11 crc kubenswrapper[4904]: E1121 13:52:11.245742 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" podUID="a8dc5688-31cd-412c-91e9-3ae137d2a20a" Nov 21 13:52:11 crc kubenswrapper[4904]: E1121 13:52:11.330347 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" podUID="b91d5a1c-a2d5-4875-a23a-e43ae7f18937" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.534602 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" event={"ID":"887b5387-ce64-43b1-8755-2c401719a2d6","Type":"ContainerStarted","Data":"b56bf49757e8eade383de3a00c1f4757eb1063991ad6950f08237c21f1d6562e"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.541278 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" event={"ID":"b91d5a1c-a2d5-4875-a23a-e43ae7f18937","Type":"ContainerStarted","Data":"c7722ff20110daa04143b489e9124f556c1c5ab7163a702a77a8b280039938c8"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.545564 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" event={"ID":"756ba318-ec48-4012-9d9d-108c3f1fad3c","Type":"ContainerStarted","Data":"1c3a89c4339f7e59d1d21d10e95b1997356a425172fd19ac64cd81320617a9a0"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.546505 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.555790 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" event={"ID":"15e30337-bd79-4d01-b3ab-2177d3c0609b","Type":"ContainerStarted","Data":"cfe698b0eb93ea9646ae812f4a89947d4d88ffe0ae6e137fe00cdfde278b2bda"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.556755 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.559934 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.561898 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" 
event={"ID":"d81ae352-08d2-433c-b883-deeb78888945","Type":"ContainerStarted","Data":"3af980b5735e24eebe1d9c589760b50cb6a33b48a586a6e21a9443ad306d064f"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.569098 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" Nov 21 13:52:11 crc kubenswrapper[4904]: E1121 13:52:11.570756 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" podUID="37bdccc0-c16d-4523-94d7-978d8313ca7f" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.583459 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" event={"ID":"50f3313d-ff99-4b0e-931a-c2a774375ae3","Type":"ContainerStarted","Data":"2fbf78fed06af0089a9853e2b01b01eefb42aa5f09e70249f4eb89e5575fb56a"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.585145 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.597746 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.598557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" event={"ID":"4b290147-91ef-4734-961b-b61487960c33","Type":"ContainerStarted","Data":"5815326d503617f5e2eef73169b3c4762da9b0bd3c313b34d9536c128ae03668"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.600196 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-kpwdb" podStartSLOduration=3.232143147 podStartE2EDuration="35.600169949s" podCreationTimestamp="2025-11-21 13:51:36 +0000 UTC" firstStartedPulling="2025-11-21 13:51:38.30219442 +0000 UTC m=+1172.423726982" lastFinishedPulling="2025-11-21 13:52:10.670221232 +0000 UTC m=+1204.791753784" observedRunningTime="2025-11-21 13:52:11.598320057 +0000 UTC m=+1205.719852609" watchObservedRunningTime="2025-11-21 13:52:11.600169949 +0000 UTC m=+1205.721702511" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.618972 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" event={"ID":"a8dc5688-31cd-412c-91e9-3ae137d2a20a","Type":"ContainerStarted","Data":"2ab9725863897d85df2a91ef34a234a30c28dd0b9c826b35ee79c857e5135ec7"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.630699 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-wvfgt" podStartSLOduration=3.7488200210000002 podStartE2EDuration="36.630667094s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.444575015 +0000 UTC m=+1171.566107567" lastFinishedPulling="2025-11-21 13:52:10.326422088 +0000 UTC m=+1204.447954640" observedRunningTime="2025-11-21 13:52:11.626472857 +0000 UTC m=+1205.748005409" watchObservedRunningTime="2025-11-21 13:52:11.630667094 +0000 UTC m=+1205.752199646" Nov 
21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.671076 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" event={"ID":"301f4657-8519-4071-a82b-b35f80739372","Type":"ContainerStarted","Data":"49e3abd0e1fd50f57ccef9ef2e25519326cffcec2a8f1ebcd85b5de42ea36352"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.672159 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.676386 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.691824 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" event={"ID":"14e3fbea-6dc2-44a4-81be-dfda27a6cdd8","Type":"ContainerStarted","Data":"b7ab62d620269b716eaedbbd2d6d605cd7138eca3c055d6ed1e2b02e1a89b1be"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.692360 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.707391 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.715287 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" event={"ID":"ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd","Type":"ContainerStarted","Data":"b7e878a9dd9338d25ec0834a8e0414a7eeee8a54d25ededc06401ee081b87ac6"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.715664 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.774613 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" event={"ID":"27625471-8f27-449b-a245-558e079a38ab","Type":"ContainerStarted","Data":"12d9957af489dffc79c8ca1f84c22f1f8116bb4d8a1f2d708ba0c1a1b41ed58d"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.867251 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" event={"ID":"98bcfebc-c45f-4a2e-a21f-9c8cf892898c","Type":"ContainerStarted","Data":"f745a6a8e4b026ec6e109f96a9281912b6ece0562a0be2f4da9a4b36b81bd98b"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.872874 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-j9dcf" podStartSLOduration=3.096474367 podStartE2EDuration="36.87285242s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:36.691003493 +0000 UTC m=+1170.812536045" lastFinishedPulling="2025-11-21 13:52:10.467381546 +0000 UTC m=+1204.588914098" observedRunningTime="2025-11-21 13:52:11.861879116 +0000 UTC m=+1205.983411668" watchObservedRunningTime="2025-11-21 13:52:11.87285242 +0000 UTC m=+1205.994384972" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 
13:52:11.882433 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" event={"ID":"c8300ff9-666c-4d85-bd43-120d41529215","Type":"ContainerStarted","Data":"6d8f06af8d7da805821dc0c8859058c17711b1e1008816955e6da3289d777808"} Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.909004 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" event={"ID":"90ef3fb1-0f63-4fd0-94ef-2fefce011d23","Type":"ContainerStarted","Data":"fb24cb6f0ed0ee8b19f1492ce475965c5af0ff7e7545d8b22499cc0d65b253f0"} Nov 21 13:52:11 crc kubenswrapper[4904]: E1121 13:52:11.917256 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" podUID="90ef3fb1-0f63-4fd0-94ef-2fefce011d23" Nov 21 13:52:11 crc kubenswrapper[4904]: I1121 13:52:11.999907 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" podStartSLOduration=32.469106428 podStartE2EDuration="36.999885384s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:52:04.824549835 +0000 UTC m=+1198.946082387" lastFinishedPulling="2025-11-21 13:52:09.355328791 +0000 UTC m=+1203.476861343" observedRunningTime="2025-11-21 13:52:11.990225462 +0000 UTC m=+1206.111758014" watchObservedRunningTime="2025-11-21 13:52:11.999885384 +0000 UTC m=+1206.121417936" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.030179 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-b8xrh" podStartSLOduration=4.406577638 podStartE2EDuration="37.030150344s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.863522215 +0000 UTC m=+1171.985054767" lastFinishedPulling="2025-11-21 13:52:10.487094921 +0000 UTC m=+1204.608627473" observedRunningTime="2025-11-21 13:52:12.022784304 +0000 UTC m=+1206.144316866" watchObservedRunningTime="2025-11-21 13:52:12.030150344 +0000 UTC m=+1206.151682896" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.157413 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-lnp8w" podStartSLOduration=4.23272196 podStartE2EDuration="37.157389054s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.368873975 +0000 UTC m=+1171.490406527" lastFinishedPulling="2025-11-21 13:52:10.293541069 +0000 UTC m=+1204.415073621" observedRunningTime="2025-11-21 13:52:12.1533086 +0000 UTC m=+1206.274841152" watchObservedRunningTime="2025-11-21 13:52:12.157389054 +0000 UTC m=+1206.278921626" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.919607 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" event={"ID":"55b86375-94e3-4c12-96b9-c5f581b3d8f3","Type":"ContainerStarted","Data":"4971fba43920a41272ba02fda2bbdbdcdf3500d9acb881e21b293546d9d2057e"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.921935 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.924103 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.924886 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" event={"ID":"887b5387-ce64-43b1-8755-2c401719a2d6","Type":"ContainerStarted","Data":"fe614a4c63ef162252c0a212cc967bc80f56fdf4722ffade6164e0bcefed666c"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.924974 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.927916 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" event={"ID":"b91d5a1c-a2d5-4875-a23a-e43ae7f18937","Type":"ContainerStarted","Data":"d3a251e8d6df7004104633bb4ac3be5a759fa8250bf07b14dc0ea8351d996a87"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.927997 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.930213 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" event={"ID":"64f35f86-9389-4506-ad53-d42eec926447","Type":"ContainerStarted","Data":"bc4b9b10cc01be3f35b98e1aa52ebbc7e6687a46a37cb4b119680b027491b521"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.930423 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.932233 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" event={"ID":"a8dc5688-31cd-412c-91e9-3ae137d2a20a","Type":"ContainerStarted","Data":"3939393c3fbc6ebf5c15af5aec123af69d6095d410e8a91cbc2785bb9d43864c"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.932371 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.934368 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" event={"ID":"fb4141a1-4768-45ae-a8e8-ec1d1c01db4e","Type":"ContainerStarted","Data":"2b0f2d7e99d921e6d2f1cd2437d7e9ac909ef8af60fca49b86b225a206d727d2"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.934604 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.936951 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" event={"ID":"d81ae352-08d2-433c-b883-deeb78888945","Type":"ContainerStarted","Data":"d31cbbce7abf8f54c43b3db98e65efd8e1e40d76f9785d9e64587f90292f7efb"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.941208 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.941277 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.941363 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.946865 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-ncvqt" podStartSLOduration=3.739957565 podStartE2EDuration="37.946839245s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.005891308 +0000 UTC m=+1171.127423860" lastFinishedPulling="2025-11-21 13:52:11.212772988 +0000 UTC m=+1205.334305540" observedRunningTime="2025-11-21 13:52:12.941579793 +0000 UTC m=+1207.063112335" watchObservedRunningTime="2025-11-21 13:52:12.946839245 +0000 UTC m=+1207.068371807" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.949919 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" event={"ID":"72dacec1-d81b-46df-acd2-962095286389","Type":"ContainerStarted","Data":"095a7ba2ff0b9c4a73d8cf1e2acdbdcf7cd37310002f848f70e2ca204fe8f05a"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.950192 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.954015 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.955906 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" event={"ID":"4dc68816-ed56-4e8b-a41b-b91868bc57d3","Type":"ContainerStarted","Data":"e6b047cdd67a1c56407fe43834c646961dfafe54faa853a215c39664a00e865d"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.956195 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.958824 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.961884 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" event={"ID":"37bdccc0-c16d-4523-94d7-978d8313ca7f","Type":"ContainerStarted","Data":"971c369adbbb55f812862df65d4f0c130c5855fcadda7cf8647a85cb1b2f525f"} Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.968107 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" podStartSLOduration=3.7882169 podStartE2EDuration="37.968085216s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.919939578 +0000 UTC m=+1172.041472130" lastFinishedPulling="2025-11-21 13:52:12.099807894 +0000 UTC 
m=+1206.221340446" observedRunningTime="2025-11-21 13:52:12.966467389 +0000 UTC m=+1207.087999951" watchObservedRunningTime="2025-11-21 13:52:12.968085216 +0000 UTC m=+1207.089617768" Nov 21 13:52:12 crc kubenswrapper[4904]: I1121 13:52:12.988266 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" podStartSLOduration=3.327742861 podStartE2EDuration="37.988247911s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.355765053 +0000 UTC m=+1171.477297605" lastFinishedPulling="2025-11-21 13:52:12.016270103 +0000 UTC m=+1206.137802655" observedRunningTime="2025-11-21 13:52:12.985009627 +0000 UTC m=+1207.106542179" watchObservedRunningTime="2025-11-21 13:52:12.988247911 +0000 UTC m=+1207.109780463" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.013598 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-57jrf" podStartSLOduration=4.550912368 podStartE2EDuration="37.013564056s" podCreationTimestamp="2025-11-21 13:51:36 +0000 UTC" firstStartedPulling="2025-11-21 13:51:38.32296259 +0000 UTC m=+1172.444495142" lastFinishedPulling="2025-11-21 13:52:10.785614278 +0000 UTC m=+1204.907146830" observedRunningTime="2025-11-21 13:52:13.008324426 +0000 UTC m=+1207.129856978" watchObservedRunningTime="2025-11-21 13:52:13.013564056 +0000 UTC m=+1207.135096608" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.073828 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" podStartSLOduration=3.097883415 podStartE2EDuration="37.073795558s" podCreationTimestamp="2025-11-21 13:51:36 +0000 UTC" firstStartedPulling="2025-11-21 13:51:38.268010921 +0000 UTC m=+1172.389543473" lastFinishedPulling="2025-11-21 13:52:12.243923064 +0000 UTC m=+1206.365455616" observedRunningTime="2025-11-21 13:52:13.038288368 +0000 UTC m=+1207.159820920" watchObservedRunningTime="2025-11-21 13:52:13.073795558 +0000 UTC m=+1207.195328120" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.099082 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" podStartSLOduration=3.368133544 podStartE2EDuration="38.099052632s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.367668578 +0000 UTC m=+1171.489201130" lastFinishedPulling="2025-11-21 13:52:12.098587666 +0000 UTC m=+1206.220120218" observedRunningTime="2025-11-21 13:52:13.081474556 +0000 UTC m=+1207.203007108" watchObservedRunningTime="2025-11-21 13:52:13.099052632 +0000 UTC m=+1207.220585194" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.129831 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-h78bp" podStartSLOduration=5.552538076 podStartE2EDuration="38.129799142s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:38.347966098 +0000 UTC m=+1172.469498650" lastFinishedPulling="2025-11-21 13:52:10.925227164 +0000 UTC m=+1205.046759716" observedRunningTime="2025-11-21 13:52:13.124246604 +0000 UTC m=+1207.245779156" watchObservedRunningTime="2025-11-21 13:52:13.129799142 +0000 UTC m=+1207.251331694" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.165820 4904 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-tg85w" podStartSLOduration=4.920482932 podStartE2EDuration="38.165787784s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.993253812 +0000 UTC m=+1172.114786364" lastFinishedPulling="2025-11-21 13:52:11.238558664 +0000 UTC m=+1205.360091216" observedRunningTime="2025-11-21 13:52:13.154859861 +0000 UTC m=+1207.276392423" watchObservedRunningTime="2025-11-21 13:52:13.165787784 +0000 UTC m=+1207.287320356" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.238081 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-n2t2q" podStartSLOduration=5.0588423 podStartE2EDuration="38.238056114s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.866855122 +0000 UTC m=+1171.988387674" lastFinishedPulling="2025-11-21 13:52:11.046068936 +0000 UTC m=+1205.167601488" observedRunningTime="2025-11-21 13:52:13.194776244 +0000 UTC m=+1207.316308816" watchObservedRunningTime="2025-11-21 13:52:13.238056114 +0000 UTC m=+1207.359588656" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.971051 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" event={"ID":"27625471-8f27-449b-a245-558e079a38ab","Type":"ContainerStarted","Data":"de72e48d56e4621f7a7eae1b19ee929305f45455e920bd503a1da564dd72a713"} Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.971698 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.974057 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" event={"ID":"37bdccc0-c16d-4523-94d7-978d8313ca7f","Type":"ContainerStarted","Data":"cff15bd5f3a093d7e4feac27f385d3295b6523a998fe46fdcee11a5edc443af2"} Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.974228 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.976693 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" event={"ID":"4b290147-91ef-4734-961b-b61487960c33","Type":"ContainerStarted","Data":"00abf7ba3fbca3321a88a83f566ffed49d805f331ff5905330c392446dcdadda"} Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.977137 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.978494 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" event={"ID":"98bcfebc-c45f-4a2e-a21f-9c8cf892898c","Type":"ContainerStarted","Data":"be76688ca251bf86f70a4ae3e9a9fd7cfb718edd776c23cebcb782f390f23931"} Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.978612 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.980835 
4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" event={"ID":"c8300ff9-666c-4d85-bd43-120d41529215","Type":"ContainerStarted","Data":"5099c66f977114d0461f10de6f5f912163f75085068338d86e9f5f1c68a678fe"} Nov 21 13:52:13 crc kubenswrapper[4904]: I1121 13:52:13.993828 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" podStartSLOduration=3.798596006 podStartE2EDuration="37.993803136s" podCreationTimestamp="2025-11-21 13:51:36 +0000 UTC" firstStartedPulling="2025-11-21 13:51:38.255883961 +0000 UTC m=+1172.377416513" lastFinishedPulling="2025-11-21 13:52:12.451091091 +0000 UTC m=+1206.572623643" observedRunningTime="2025-11-21 13:52:13.989913806 +0000 UTC m=+1208.111446358" watchObservedRunningTime="2025-11-21 13:52:13.993803136 +0000 UTC m=+1208.115335688" Nov 21 13:52:14 crc kubenswrapper[4904]: I1121 13:52:14.018240 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" podStartSLOduration=3.988680843 podStartE2EDuration="39.0182193s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.428175276 +0000 UTC m=+1171.549707818" lastFinishedPulling="2025-11-21 13:52:12.457713723 +0000 UTC m=+1206.579246275" observedRunningTime="2025-11-21 13:52:14.009728724 +0000 UTC m=+1208.131261296" watchObservedRunningTime="2025-11-21 13:52:14.0182193 +0000 UTC m=+1208.139751852" Nov 21 13:52:14 crc kubenswrapper[4904]: I1121 13:52:14.032757 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" podStartSLOduration=3.55231671 podStartE2EDuration="39.032733225s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:36.969551679 +0000 UTC m=+1171.091084231" lastFinishedPulling="2025-11-21 13:52:12.449968184 +0000 UTC m=+1206.571500746" observedRunningTime="2025-11-21 13:52:14.028029336 +0000 UTC m=+1208.149561898" watchObservedRunningTime="2025-11-21 13:52:14.032733225 +0000 UTC m=+1208.154265777" Nov 21 13:52:14 crc kubenswrapper[4904]: I1121 13:52:14.056359 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" podStartSLOduration=2.795821261 podStartE2EDuration="39.05633208s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.388641113 +0000 UTC m=+1171.510173675" lastFinishedPulling="2025-11-21 13:52:13.649151942 +0000 UTC m=+1207.770684494" observedRunningTime="2025-11-21 13:52:14.047063327 +0000 UTC m=+1208.168595879" watchObservedRunningTime="2025-11-21 13:52:14.05633208 +0000 UTC m=+1208.177864642" Nov 21 13:52:14 crc kubenswrapper[4904]: I1121 13:52:14.066503 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" podStartSLOduration=4.44826184 podStartE2EDuration="39.066479374s" podCreationTimestamp="2025-11-21 13:51:35 +0000 UTC" firstStartedPulling="2025-11-21 13:51:37.899793063 +0000 UTC m=+1172.021325615" lastFinishedPulling="2025-11-21 13:52:12.518010607 +0000 UTC m=+1206.639543149" observedRunningTime="2025-11-21 13:52:14.063741432 +0000 UTC m=+1208.185274004" watchObservedRunningTime="2025-11-21 13:52:14.066479374 
+0000 UTC m=+1208.188011926" Nov 21 13:52:14 crc kubenswrapper[4904]: I1121 13:52:14.990068 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" Nov 21 13:52:17 crc kubenswrapper[4904]: I1121 13:52:17.067866 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-vtkhq" Nov 21 13:52:20 crc kubenswrapper[4904]: I1121 13:52:20.401147 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t" Nov 21 13:52:20 crc kubenswrapper[4904]: I1121 13:52:20.867517 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-79fb5496bb-zp56v" Nov 21 13:52:24 crc kubenswrapper[4904]: E1121 13:52:24.516163 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.9:5001/openstack-k8s-operators/telemetry-operator:0311e5290726db3224383a9f7daf7d0c56839e0c\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" podUID="90ef3fb1-0f63-4fd0-94ef-2fefce011d23" Nov 21 13:52:25 crc kubenswrapper[4904]: I1121 13:52:25.774329 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" Nov 21 13:52:25 crc kubenswrapper[4904]: I1121 13:52:25.845267 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-bnt22" Nov 21 13:52:26 crc kubenswrapper[4904]: I1121 13:52:26.086642 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-jp6bp" Nov 21 13:52:26 crc kubenswrapper[4904]: I1121 13:52:26.168152 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-nkqwb" Nov 21 13:52:26 crc kubenswrapper[4904]: I1121 13:52:26.450580 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-xr5mv" Nov 21 13:52:26 crc kubenswrapper[4904]: I1121 13:52:26.746894 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-6bxqf" Nov 21 13:52:26 crc kubenswrapper[4904]: I1121 13:52:26.775187 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-qmmmc" Nov 21 13:52:26 crc kubenswrapper[4904]: I1121 13:52:26.818010 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-lrfmt" Nov 21 13:52:39 crc kubenswrapper[4904]: I1121 13:52:39.188747 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" event={"ID":"90ef3fb1-0f63-4fd0-94ef-2fefce011d23","Type":"ContainerStarted","Data":"e07a3cfbb45a1ae5cd6fa6859b3e49f9e3348a1ff2e2d11832436bf281cf61d3"} Nov 21 13:52:39 crc kubenswrapper[4904]: I1121 13:52:39.189429 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" Nov 21 13:52:39 crc kubenswrapper[4904]: I1121 13:52:39.212992 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" podStartSLOduration=2.958464353 podStartE2EDuration="1m3.212970609s" podCreationTimestamp="2025-11-21 13:51:36 +0000 UTC" firstStartedPulling="2025-11-21 13:51:38.323220826 +0000 UTC m=+1172.444753378" lastFinishedPulling="2025-11-21 13:52:38.577727082 +0000 UTC m=+1232.699259634" observedRunningTime="2025-11-21 13:52:39.209294624 +0000 UTC m=+1233.330827196" watchObservedRunningTime="2025-11-21 13:52:39.212970609 +0000 UTC m=+1233.334503161" Nov 21 13:52:46 crc kubenswrapper[4904]: I1121 13:52:46.861110 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7fc59d4bfd-vzk7g" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.654826 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2725"] Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.657245 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.660309 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.660510 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.660667 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.660796 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jh268" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.664324 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2725"] Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.752226 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9ckps"] Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.753987 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.760820 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.767219 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48mt\" (UniqueName: \"kubernetes.io/projected/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-kube-api-access-h48mt\") pod \"dnsmasq-dns-675f4bcbfc-f2725\" (UID: \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.767433 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-config\") pod \"dnsmasq-dns-675f4bcbfc-f2725\" (UID: \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.778784 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9ckps"] Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.869099 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-config\") pod \"dnsmasq-dns-675f4bcbfc-f2725\" (UID: \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.869220 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48mt\" (UniqueName: \"kubernetes.io/projected/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-kube-api-access-h48mt\") pod \"dnsmasq-dns-675f4bcbfc-f2725\" (UID: \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.869301 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.869341 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rff98\" (UniqueName: \"kubernetes.io/projected/e4259de2-ba62-4476-8a7f-f143b38daab8-kube-api-access-rff98\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.869459 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-config\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.871096 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-config\") pod \"dnsmasq-dns-675f4bcbfc-f2725\" (UID: \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 
13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.901983 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h48mt\" (UniqueName: \"kubernetes.io/projected/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-kube-api-access-h48mt\") pod \"dnsmasq-dns-675f4bcbfc-f2725\" (UID: \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.972256 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.972351 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rff98\" (UniqueName: \"kubernetes.io/projected/e4259de2-ba62-4476-8a7f-f143b38daab8-kube-api-access-rff98\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.972391 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-config\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.973706 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-config\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.974348 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:03 crc kubenswrapper[4904]: I1121 13:53:03.986399 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:04 crc kubenswrapper[4904]: I1121 13:53:04.010913 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rff98\" (UniqueName: \"kubernetes.io/projected/e4259de2-ba62-4476-8a7f-f143b38daab8-kube-api-access-rff98\") pod \"dnsmasq-dns-78dd6ddcc-9ckps\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:04 crc kubenswrapper[4904]: I1121 13:53:04.078862 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:04 crc kubenswrapper[4904]: I1121 13:53:04.441138 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9ckps"] Nov 21 13:53:04 crc kubenswrapper[4904]: I1121 13:53:04.563295 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2725"] Nov 21 13:53:04 crc kubenswrapper[4904]: W1121 13:53:04.563699 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fdbed90_b15e_45d4_97f3_a76cd9a36d35.slice/crio-a70f8396f4831fd5d07e23e480034b6dd2272bccc54e95cafbda54f774b514ea WatchSource:0}: Error finding container a70f8396f4831fd5d07e23e480034b6dd2272bccc54e95cafbda54f774b514ea: Status 404 returned error can't find the container with id a70f8396f4831fd5d07e23e480034b6dd2272bccc54e95cafbda54f774b514ea Nov 21 13:53:05 crc kubenswrapper[4904]: I1121 13:53:05.453693 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" event={"ID":"e4259de2-ba62-4476-8a7f-f143b38daab8","Type":"ContainerStarted","Data":"44ed9b5fe7cac777a8b63ab9f073073fbb986593d8313106fca312b3d610726f"} Nov 21 13:53:05 crc kubenswrapper[4904]: I1121 13:53:05.457080 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" event={"ID":"8fdbed90-b15e-45d4-97f3-a76cd9a36d35","Type":"ContainerStarted","Data":"a70f8396f4831fd5d07e23e480034b6dd2272bccc54e95cafbda54f774b514ea"} Nov 21 13:53:06 crc kubenswrapper[4904]: I1121 13:53:06.787821 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2725"] Nov 21 13:53:06 crc kubenswrapper[4904]: I1121 13:53:06.862722 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdlcj"] Nov 21 13:53:06 crc kubenswrapper[4904]: I1121 13:53:06.864534 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:06 crc kubenswrapper[4904]: I1121 13:53:06.871782 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdlcj"] Nov 21 13:53:06 crc kubenswrapper[4904]: I1121 13:53:06.937570 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-config\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:06 crc kubenswrapper[4904]: I1121 13:53:06.937635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpcb\" (UniqueName: \"kubernetes.io/projected/c4099b60-5765-4105-b20b-3308b43df36e-kube-api-access-9gpcb\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:06 crc kubenswrapper[4904]: I1121 13:53:06.937738 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.041883 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.041984 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-config\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.042025 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gpcb\" (UniqueName: \"kubernetes.io/projected/c4099b60-5765-4105-b20b-3308b43df36e-kube-api-access-9gpcb\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.044079 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-config\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.049721 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.106130 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gpcb\" (UniqueName: 
\"kubernetes.io/projected/c4099b60-5765-4105-b20b-3308b43df36e-kube-api-access-9gpcb\") pod \"dnsmasq-dns-666b6646f7-xdlcj\" (UID: \"c4099b60-5765-4105-b20b-3308b43df36e\") " pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.174600 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9ckps"] Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.208215 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.216318 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c5qt7"] Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.219339 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.257893 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c5qt7"] Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.348301 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkg6\" (UniqueName: \"kubernetes.io/projected/59898c2d-078d-48ba-8dfa-81604e47ce24-kube-api-access-7vkg6\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.348798 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.348835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-config\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.450987 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vkg6\" (UniqueName: \"kubernetes.io/projected/59898c2d-078d-48ba-8dfa-81604e47ce24-kube-api-access-7vkg6\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.451065 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.451106 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-config\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.452083 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-config\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.452178 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.477167 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vkg6\" (UniqueName: \"kubernetes.io/projected/59898c2d-078d-48ba-8dfa-81604e47ce24-kube-api-access-7vkg6\") pod \"dnsmasq-dns-57d769cc4f-c5qt7\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:07 crc kubenswrapper[4904]: I1121 13:53:07.552261 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.003294 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdlcj"] Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.017966 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.020100 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.022525 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.026756 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.027166 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.027325 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pfqw8" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.028808 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.028986 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.029192 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.029054 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.178039 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f72ba976-6eb5-4886-81fc-3f7e4563039d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.178119 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.178155 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqpxt\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-kube-api-access-jqpxt\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.178251 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.179400 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.179438 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.179466 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.179495 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.179535 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f72ba976-6eb5-4886-81fc-3f7e4563039d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.179624 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-config-data\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.179668 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.191628 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c5qt7"] Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.282886 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.284291 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.284344 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.284393 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.284439 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f72ba976-6eb5-4886-81fc-3f7e4563039d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.284549 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-config-data\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.284614 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.284639 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.284833 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/f72ba976-6eb5-4886-81fc-3f7e4563039d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.285128 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.285224 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.285253 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqpxt\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-kube-api-access-jqpxt\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.285981 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.286545 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.287133 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.290945 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.299549 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.300803 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f72ba976-6eb5-4886-81fc-3f7e4563039d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.316042 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.319224 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqpxt\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-kube-api-access-jqpxt\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.319531 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f72ba976-6eb5-4886-81fc-3f7e4563039d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.323754 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-config-data\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.362911 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.399052 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.400910 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.411824 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.412132 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.412310 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.412477 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.412912 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.413082 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.413219 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m98jh" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.426976 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493008 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914527f1-8202-44fc-bbbb-4c39cf793a7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493107 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493182 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqgmp\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-kube-api-access-pqgmp\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493217 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493493 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493533 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493581 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493609 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914527f1-8202-44fc-bbbb-4c39cf793a7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493686 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.493804 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.494098 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.498875 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" event={"ID":"59898c2d-078d-48ba-8dfa-81604e47ce24","Type":"ContainerStarted","Data":"a8c3765341e796da339c0bc73592ae82473d0173c074a789dfda8b636f3a1067"} Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.510302 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" event={"ID":"c4099b60-5765-4105-b20b-3308b43df36e","Type":"ContainerStarted","Data":"ccd48cf0274dee650a08d799bea2ad700c1f54e67766f4a2a565d0d399882ca8"} Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.596508 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914527f1-8202-44fc-bbbb-4c39cf793a7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.596583 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.596683 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.596758 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.596791 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914527f1-8202-44fc-bbbb-4c39cf793a7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.596822 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.597107 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqgmp\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-kube-api-access-pqgmp\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.597134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.597202 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.597221 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.597242 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") 
" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.599680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.600241 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.600523 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.601502 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.601881 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.610361 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914527f1-8202-44fc-bbbb-4c39cf793a7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.611092 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.619835 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914527f1-8202-44fc-bbbb-4c39cf793a7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.620947 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.621160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.628317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqgmp\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-kube-api-access-pqgmp\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.652822 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.660475 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 13:53:08 crc kubenswrapper[4904]: I1121 13:53:08.737787 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.428607 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.538711 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f72ba976-6eb5-4886-81fc-3f7e4563039d","Type":"ContainerStarted","Data":"7cbc9e21e77fdc0e9f00cc22cc9dc0f06db02b3787df31a9747e98b7a9834097"} Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.575552 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.606510 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.618989 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.637356 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.639148 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.639489 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.639800 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2n689" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.641247 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.641601 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.731940 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/554709ef-9d18-4a19-aded-0c8fe94e30e8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.731988 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/554709ef-9d18-4a19-aded-0c8fe94e30e8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.732037 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-config-data-default\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.732081 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.732139 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh24q\" (UniqueName: \"kubernetes.io/projected/554709ef-9d18-4a19-aded-0c8fe94e30e8-kube-api-access-rh24q\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.732161 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/554709ef-9d18-4a19-aded-0c8fe94e30e8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.732182 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-kolla-config\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.732206 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.834375 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/554709ef-9d18-4a19-aded-0c8fe94e30e8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.834452 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/554709ef-9d18-4a19-aded-0c8fe94e30e8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.834525 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-config-data-default\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.834581 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.834677 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh24q\" (UniqueName: \"kubernetes.io/projected/554709ef-9d18-4a19-aded-0c8fe94e30e8-kube-api-access-rh24q\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.834710 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/554709ef-9d18-4a19-aded-0c8fe94e30e8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.834739 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-kolla-config\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.834772 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: 
\"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.835366 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.835871 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/554709ef-9d18-4a19-aded-0c8fe94e30e8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.836714 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-config-data-default\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.837762 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.838262 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/554709ef-9d18-4a19-aded-0c8fe94e30e8-kolla-config\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.859075 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/554709ef-9d18-4a19-aded-0c8fe94e30e8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.870969 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh24q\" (UniqueName: \"kubernetes.io/projected/554709ef-9d18-4a19-aded-0c8fe94e30e8-kube-api-access-rh24q\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.871241 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/554709ef-9d18-4a19-aded-0c8fe94e30e8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.880283 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"554709ef-9d18-4a19-aded-0c8fe94e30e8\") " pod="openstack/openstack-galera-0" Nov 21 13:53:09 crc kubenswrapper[4904]: I1121 13:53:09.962013 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 21 13:53:10 crc kubenswrapper[4904]: I1121 13:53:10.600056 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"914527f1-8202-44fc-bbbb-4c39cf793a7b","Type":"ContainerStarted","Data":"12d9fbb9e3377b9700d1dd4ea9d4ef575822eb9f775698eb9203b4173232edb6"} Nov 21 13:53:10 crc kubenswrapper[4904]: I1121 13:53:10.941479 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 13:53:10 crc kubenswrapper[4904]: I1121 13:53:10.944095 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:10 crc kubenswrapper[4904]: I1121 13:53:10.981361 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pr7sf" Nov 21 13:53:10 crc kubenswrapper[4904]: I1121 13:53:10.986181 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 21 13:53:10 crc kubenswrapper[4904]: I1121 13:53:10.990917 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.032457 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.044637 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.063259 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.109679 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.109832 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.116821 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.117102 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.117680 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-gmqsd" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.144953 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2987646d-06ff-44a7-b766-ff6ff19ed796-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.145086 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.145130 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2987646d-06ff-44a7-b766-ff6ff19ed796-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.145168 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.145207 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2987646d-06ff-44a7-b766-ff6ff19ed796-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.145236 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.145295 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmtw\" (UniqueName: \"kubernetes.io/projected/2987646d-06ff-44a7-b766-ff6ff19ed796-kube-api-access-pxmtw\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.145328 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248104 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2987646d-06ff-44a7-b766-ff6ff19ed796-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248167 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248192 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f2e9b6a-f2dd-4647-9652-e5f609740a53-kolla-config\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248217 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2987646d-06ff-44a7-b766-ff6ff19ed796-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248238 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248265 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgh2h\" (UniqueName: \"kubernetes.io/projected/3f2e9b6a-f2dd-4647-9652-e5f609740a53-kube-api-access-sgh2h\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248308 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmtw\" (UniqueName: \"kubernetes.io/projected/2987646d-06ff-44a7-b766-ff6ff19ed796-kube-api-access-pxmtw\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248329 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2e9b6a-f2dd-4647-9652-e5f609740a53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248349 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-config-data-default\") pod 
\"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248380 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f2e9b6a-f2dd-4647-9652-e5f609740a53-config-data\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248397 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2987646d-06ff-44a7-b766-ff6ff19ed796-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248438 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2e9b6a-f2dd-4647-9652-e5f609740a53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.248480 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.250108 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.252140 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.310783 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.350190 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f2e9b6a-f2dd-4647-9652-e5f609740a53-config-data\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.350276 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2e9b6a-f2dd-4647-9652-e5f609740a53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.350355 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f2e9b6a-f2dd-4647-9652-e5f609740a53-kolla-config\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.350400 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgh2h\" (UniqueName: \"kubernetes.io/projected/3f2e9b6a-f2dd-4647-9652-e5f609740a53-kube-api-access-sgh2h\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.350452 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2e9b6a-f2dd-4647-9652-e5f609740a53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.351604 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f2e9b6a-f2dd-4647-9652-e5f609740a53-kolla-config\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.353975 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f2e9b6a-f2dd-4647-9652-e5f609740a53-config-data\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.393673 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.397342 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2987646d-06ff-44a7-b766-ff6ff19ed796-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.398130 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f2e9b6a-f2dd-4647-9652-e5f609740a53-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.398130 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgh2h\" (UniqueName: \"kubernetes.io/projected/3f2e9b6a-f2dd-4647-9652-e5f609740a53-kube-api-access-sgh2h\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.399387 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2987646d-06ff-44a7-b766-ff6ff19ed796-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 
13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.400253 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2987646d-06ff-44a7-b766-ff6ff19ed796-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.400455 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2987646d-06ff-44a7-b766-ff6ff19ed796-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.400706 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2e9b6a-f2dd-4647-9652-e5f609740a53-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3f2e9b6a-f2dd-4647-9652-e5f609740a53\") " pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.407622 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmtw\" (UniqueName: \"kubernetes.io/projected/2987646d-06ff-44a7-b766-ff6ff19ed796-kube-api-access-pxmtw\") pod \"openstack-cell1-galera-0\" (UID: \"2987646d-06ff-44a7-b766-ff6ff19ed796\") " pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.409421 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 21 13:53:11 crc kubenswrapper[4904]: W1121 13:53:11.422497 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod554709ef_9d18_4a19_aded_0c8fe94e30e8.slice/crio-522e1c9e8c47dc12543171d3e0ccf2dd324d7d93724ab026caa3f8d37145f726 WatchSource:0}: Error finding container 522e1c9e8c47dc12543171d3e0ccf2dd324d7d93724ab026caa3f8d37145f726: Status 404 returned error can't find the container with id 522e1c9e8c47dc12543171d3e0ccf2dd324d7d93724ab026caa3f8d37145f726 Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.493447 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.615501 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"554709ef-9d18-4a19-aded-0c8fe94e30e8","Type":"ContainerStarted","Data":"522e1c9e8c47dc12543171d3e0ccf2dd324d7d93724ab026caa3f8d37145f726"} Nov 21 13:53:11 crc kubenswrapper[4904]: I1121 13:53:11.621727 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 21 13:53:12 crc kubenswrapper[4904]: I1121 13:53:12.407801 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 21 13:53:12 crc kubenswrapper[4904]: I1121 13:53:12.533363 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 21 13:53:12 crc kubenswrapper[4904]: I1121 13:53:12.640865 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2987646d-06ff-44a7-b766-ff6ff19ed796","Type":"ContainerStarted","Data":"352128fd3e27281e72a1987e65e7c252a2f04fca867bdce34cd242f0a6db8db4"} Nov 21 13:53:12 crc kubenswrapper[4904]: I1121 13:53:12.652149 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3f2e9b6a-f2dd-4647-9652-e5f609740a53","Type":"ContainerStarted","Data":"c18762475e29396921ff5f0e41fd69189d743058e4f99ceb5b20e51eab0814bb"} Nov 21 13:53:13 crc kubenswrapper[4904]: I1121 13:53:13.243082 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:53:13 crc kubenswrapper[4904]: I1121 13:53:13.259115 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 13:53:13 crc kubenswrapper[4904]: I1121 13:53:13.266881 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-44vc8" Nov 21 13:53:13 crc kubenswrapper[4904]: I1121 13:53:13.271267 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:53:13 crc kubenswrapper[4904]: I1121 13:53:13.407409 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpmd\" (UniqueName: \"kubernetes.io/projected/4c951951-b705-4ab1-b041-887d038f35ec-kube-api-access-xcpmd\") pod \"kube-state-metrics-0\" (UID: \"4c951951-b705-4ab1-b041-887d038f35ec\") " pod="openstack/kube-state-metrics-0" Nov 21 13:53:13 crc kubenswrapper[4904]: I1121 13:53:13.511922 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcpmd\" (UniqueName: \"kubernetes.io/projected/4c951951-b705-4ab1-b041-887d038f35ec-kube-api-access-xcpmd\") pod \"kube-state-metrics-0\" (UID: \"4c951951-b705-4ab1-b041-887d038f35ec\") " pod="openstack/kube-state-metrics-0" Nov 21 13:53:13 crc kubenswrapper[4904]: I1121 13:53:13.555010 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcpmd\" (UniqueName: \"kubernetes.io/projected/4c951951-b705-4ab1-b041-887d038f35ec-kube-api-access-xcpmd\") pod \"kube-state-metrics-0\" (UID: \"4c951951-b705-4ab1-b041-887d038f35ec\") " pod="openstack/kube-state-metrics-0" Nov 21 13:53:13 crc kubenswrapper[4904]: I1121 13:53:13.603846 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.375934 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt"] Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.384185 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.387926 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.389764 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-7txm6" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.427860 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt"] Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.570284 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6b29\" (UniqueName: \"kubernetes.io/projected/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-kube-api-access-b6b29\") pod \"observability-ui-dashboards-7d5fb4cbfb-nfqjt\" (UID: \"6a4ca754-03de-434a-b5ba-3f0d288b1e0c\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.570341 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-nfqjt\" (UID: \"6a4ca754-03de-434a-b5ba-3f0d288b1e0c\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.582641 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.624357 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.627961 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.637229 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nw542" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.637453 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.637617 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.637792 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.638016 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.638193 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.676819 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f0496532-016f-4090-9d82-1b500e179cd1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.676918 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.676986 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.677019 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f0496532-016f-4090-9d82-1b500e179cd1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.677061 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6b29\" (UniqueName: \"kubernetes.io/projected/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-kube-api-access-b6b29\") pod \"observability-ui-dashboards-7d5fb4cbfb-nfqjt\" (UID: \"6a4ca754-03de-434a-b5ba-3f0d288b1e0c\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.677081 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-nfqjt\" (UID: \"6a4ca754-03de-434a-b5ba-3f0d288b1e0c\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.677117 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hns\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-kube-api-access-78hns\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.677142 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.677160 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-config\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.677178 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: E1121 13:53:14.682257 4904 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 21 13:53:14 crc kubenswrapper[4904]: E1121 13:53:14.682335 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-serving-cert podName:6a4ca754-03de-434a-b5ba-3f0d288b1e0c nodeName:}" failed. No retries permitted until 2025-11-21 13:53:15.182307283 +0000 UTC m=+1269.303839835 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-nfqjt" (UID: "6a4ca754-03de-434a-b5ba-3f0d288b1e0c") : secret "observability-ui-dashboards" not found Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.684743 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.731979 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6b29\" (UniqueName: \"kubernetes.io/projected/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-kube-api-access-b6b29\") pod \"observability-ui-dashboards-7d5fb4cbfb-nfqjt\" (UID: \"6a4ca754-03de-434a-b5ba-3f0d288b1e0c\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.781245 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f0496532-016f-4090-9d82-1b500e179cd1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.781344 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78hns\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-kube-api-access-78hns\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.781373 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.781392 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-config\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.781412 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.781442 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f0496532-016f-4090-9d82-1b500e179cd1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.781483 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-thanos-prometheus-http-client-file\") pod 
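The E1121 pair above shows the kubelet's per-volume retry machinery: the serving-cert Secret did not exist yet when the pod was first synced, so nestedpendingoperations.go:348 schedules the next attempt with a growing delay ("No retries permitted until ... durationBeforeRetry 500ms"). A sketch of that exponential-backoff behaviour, with assumed constants close to the kubelet defaults and a hypothetical mountServingCert helper; in the real kubelet the retry is scheduled by the operation executor, not a blocking sleep:

    // Sketch of exponential backoff for a failing MountVolume.SetUp:
    // each failure roughly doubles the wait before the next attempt,
    // capped at a maximum. Illustrative only.
    package main

    import (
            "errors"
            "fmt"
            "time"
    )

    func mountServingCert(attempt int) error {
            if attempt < 2 {
                    // The Secret had not been created yet on the first sync.
                    return errors.New(`secret "observability-ui-dashboards" not found`)
            }
            return nil // Secret now exists; the retried mount succeeds.
    }

    func main() {
            backoff, maxBackoff := 500*time.Millisecond, 2*time.Minute
            for attempt := 1; ; attempt++ {
                    err := mountServingCert(attempt)
                    if err == nil {
                            fmt.Printf("attempt %d: MountVolume.SetUp succeeded\n", attempt)
                            return
                    }
                    fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, backoff)
                    time.Sleep(backoff) // stand-in for the executor's scheduled retry
                    if backoff *= 2; backoff > maxBackoff {
                            backoff = maxBackoff
                    }
            }
    }

Consistent with this, the retried mount for serving-cert succeeds further down, once the reflector cache reports the Secret (13:53:15.218295 MountVolume started, 13:53:15.222841 SetUp succeeded).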
\"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.781533 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.795095 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f0496532-016f-4090-9d82-1b500e179cd1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.796551 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.807893 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-config\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.812670 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f0496532-016f-4090-9d82-1b500e179cd1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.813510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.845150 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78hns\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-kube-api-access-78hns\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.855428 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.855520 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-94fdb89dc-xcd5x"] Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.857061 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.870668 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-94fdb89dc-xcd5x"] Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.885561 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 21 13:53:14 crc kubenswrapper[4904]: I1121 13:53:14.885615 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d7b83180c0cc9ea158a959924c2564cac05e2804c5457e8d57944755befee3f/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.011534 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-service-ca\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.011584 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-oauth-serving-cert\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.011614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-serving-cert\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.011645 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-trusted-ca-bundle\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.011846 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbfj\" (UniqueName: \"kubernetes.io/projected/439ac172-d63e-4a04-a526-cc2c6e4e1685-kube-api-access-xdbfj\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.011881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-oauth-config\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:15 crc kubenswrapper[4904]: 
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.011907 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-config\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.022843 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"prometheus-metric-storage-0\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " pod="openstack/prometheus-metric-storage-0"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.116079 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdbfj\" (UniqueName: \"kubernetes.io/projected/439ac172-d63e-4a04-a526-cc2c6e4e1685-kube-api-access-xdbfj\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.116198 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-oauth-config\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.116230 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-config\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.116441 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-service-ca\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.116473 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-oauth-serving-cert\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.116501 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-serving-cert\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.116599 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-trusted-ca-bundle\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.118128 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-oauth-serving-cert\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.118203 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-config\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.118427 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-service-ca\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.123558 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/439ac172-d63e-4a04-a526-cc2c6e4e1685-trusted-ca-bundle\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.123585 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-serving-cert\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.123723 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/439ac172-d63e-4a04-a526-cc2c6e4e1685-console-oauth-config\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.135468 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdbfj\" (UniqueName: \"kubernetes.io/projected/439ac172-d63e-4a04-a526-cc2c6e4e1685-kube-api-access-xdbfj\") pod \"console-94fdb89dc-xcd5x\" (UID: \"439ac172-d63e-4a04-a526-cc2c6e4e1685\") " pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.218295 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-nfqjt\" (UID: \"6a4ca754-03de-434a-b5ba-3f0d288b1e0c\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.222841 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4ca754-03de-434a-b5ba-3f0d288b1e0c-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-nfqjt\" (UID: \"6a4ca754-03de-434a-b5ba-3f0d288b1e0c\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.269916 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.287534 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-94fdb89dc-xcd5x"
Nov 21 13:53:15 crc kubenswrapper[4904]: I1121 13:53:15.326939 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.054892 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.057276 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.061579 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.061753 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-w7926"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.062030 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.062264 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.067081 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.077807 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.176731 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.176807 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.176830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.177204 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.177298 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0"
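The recurring ADD / "No sandbox for pod can be found. Need to start a new one" / UPDATE pattern above is the pod-startup path: on each sync the kubelet looks for a ready sandbox for the pod and, finding none, asks the CRI runtime to create one; the resulting sandbox ID is what later surfaces in the "SyncLoop (PLEG): ... ContainerStarted" events. A simplified sketch of that check, not the actual kuberuntime code:

    // Sketch: reuse an existing ready sandbox, otherwise create one.
    // The real ID comes from CRI RunPodSandbox; this is illustrative.
    package main

    import "fmt"

    type runtime struct{ sandboxes map[string]string } // pod -> sandbox ID

    func (r *runtime) ensureSandbox(pod string) string {
            if id, ok := r.sandboxes[pod]; ok {
                    return id // existing, still-ready sandbox
            }
            fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", pod)
            id := "sandbox-for-" + pod // hypothetical placeholder ID
            r.sandboxes[pod] = id
            return id
    }

    func main() {
            rt := &runtime{sandboxes: map[string]string{}}
            fmt.Println(rt.ensureSandbox("openstack/ovsdbserver-nb-0")) // creates
            fmt.Println(rt.ensureSandbox("openstack/ovsdbserver-nb-0")) // reuses
    }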
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.177347 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.177433 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.177715 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxwrv\" (UniqueName: \"kubernetes.io/projected/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-kube-api-access-vxwrv\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.253851 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gh8lv"] Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.255245 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gh8lv" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.258467 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.261439 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-v6w29" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.266523 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-jt79x"] Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.269111 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-jt79x" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.269760 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.279386 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gh8lv"] Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.280542 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxwrv\" (UniqueName: \"kubernetes.io/projected/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-kube-api-access-vxwrv\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.280592 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.280632 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.280667 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.280720 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.280746 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.280780 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.280809 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.281646 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-ovsdb-rundir\") pod 
\"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.282402 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.282868 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.282923 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.289825 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.290553 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.292308 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jt79x"] Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.295118 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.312169 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxwrv\" (UniqueName: \"kubernetes.io/projected/1ac2319f-b7ee-441c-b325-5ca2d83c87e4-kube-api-access-vxwrv\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.373075 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1ac2319f-b7ee-441c-b325-5ca2d83c87e4\") " pod="openstack/ovsdbserver-nb-0" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.382883 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-lib\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x" Nov 21 13:53:17 crc 
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.382961 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-run\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383002 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-728km\" (UniqueName: \"kubernetes.io/projected/6f19242e-f99f-4408-9c53-7a92a8c191bc-kube-api-access-728km\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383023 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ec880a-b9f8-4b7a-9d69-98b730a07a02-combined-ca-bundle\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383049 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f19242e-f99f-4408-9c53-7a92a8c191bc-scripts\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383079 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ec880a-b9f8-4b7a-9d69-98b730a07a02-scripts\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383098 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-etc-ovs\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383136 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-run\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383158 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-log\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383178 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7v9n\" (UniqueName: \"kubernetes.io/projected/48ec880a-b9f8-4b7a-9d69-98b730a07a02-kube-api-access-l7v9n\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383194 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-run-ovn\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383223 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-log-ovn\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.383247 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ec880a-b9f8-4b7a-9d69-98b730a07a02-ovn-controller-tls-certs\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.392665 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486238 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-lib\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486322 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-run\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486371 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-728km\" (UniqueName: \"kubernetes.io/projected/6f19242e-f99f-4408-9c53-7a92a8c191bc-kube-api-access-728km\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486402 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ec880a-b9f8-4b7a-9d69-98b730a07a02-combined-ca-bundle\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486435 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f19242e-f99f-4408-9c53-7a92a8c191bc-scripts\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486472 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ec880a-b9f8-4b7a-9d69-98b730a07a02-scripts\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486500 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-etc-ovs\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486531 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-run\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486579 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-log\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486605 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7v9n\" (UniqueName: \"kubernetes.io/projected/48ec880a-b9f8-4b7a-9d69-98b730a07a02-kube-api-access-l7v9n\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486632 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-run-ovn\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486686 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-log-ovn\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.486720 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ec880a-b9f8-4b7a-9d69-98b730a07a02-ovn-controller-tls-certs\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.487008 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-lib\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.487141 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-run\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.487141 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-run\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.487317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-run-ovn\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.487478 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/48ec880a-b9f8-4b7a-9d69-98b730a07a02-var-log-ovn\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.488009 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-etc-ovs\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.488065 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f19242e-f99f-4408-9c53-7a92a8c191bc-var-log\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.490592 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ec880a-b9f8-4b7a-9d69-98b730a07a02-scripts\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.491155 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f19242e-f99f-4408-9c53-7a92a8c191bc-scripts\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.498024 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/48ec880a-b9f8-4b7a-9d69-98b730a07a02-ovn-controller-tls-certs\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.513791 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-728km\" (UniqueName: \"kubernetes.io/projected/6f19242e-f99f-4408-9c53-7a92a8c191bc-kube-api-access-728km\") pod \"ovn-controller-ovs-jt79x\" (UID: \"6f19242e-f99f-4408-9c53-7a92a8c191bc\") " pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.513928 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48ec880a-b9f8-4b7a-9d69-98b730a07a02-combined-ca-bundle\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.516984 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7v9n\" (UniqueName: \"kubernetes.io/projected/48ec880a-b9f8-4b7a-9d69-98b730a07a02-kube-api-access-l7v9n\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv"
\"kube-api-access-l7v9n\" (UniqueName: \"kubernetes.io/projected/48ec880a-b9f8-4b7a-9d69-98b730a07a02-kube-api-access-l7v9n\") pod \"ovn-controller-gh8lv\" (UID: \"48ec880a-b9f8-4b7a-9d69-98b730a07a02\") " pod="openstack/ovn-controller-gh8lv" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.596867 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gh8lv" Nov 21 13:53:17 crc kubenswrapper[4904]: I1121 13:53:17.675244 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-jt79x" Nov 21 13:53:20 crc kubenswrapper[4904]: I1121 13:53:20.886475 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c951951-b705-4ab1-b041-887d038f35ec","Type":"ContainerStarted","Data":"0f5b1aafb68eda6b6f2a7806c503378c5eb49ea3426aca72eee21aa1fa832e6d"} Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.014291 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.016969 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.019899 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.019978 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.019993 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-qwjs6" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.020229 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.030246 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.168487 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq22w\" (UniqueName: \"kubernetes.io/projected/b7f4b26a-f41d-478c-b706-67baa265aaf8-kube-api-access-pq22w\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.169005 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.169046 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f4b26a-f41d-478c-b706-67baa265aaf8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.169089 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.169123 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7f4b26a-f41d-478c-b706-67baa265aaf8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.169185 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b7f4b26a-f41d-478c-b706-67baa265aaf8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.169259 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.169283 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271221 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f4b26a-f41d-478c-b706-67baa265aaf8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271280 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271307 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7f4b26a-f41d-478c-b706-67baa265aaf8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271347 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b7f4b26a-f41d-478c-b706-67baa265aaf8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271393 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271413 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271460 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq22w\" (UniqueName: \"kubernetes.io/projected/b7f4b26a-f41d-478c-b706-67baa265aaf8-kube-api-access-pq22w\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271500 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.271866 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.274848 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7f4b26a-f41d-478c-b706-67baa265aaf8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.276306 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f4b26a-f41d-478c-b706-67baa265aaf8-config\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.279201 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b7f4b26a-f41d-478c-b706-67baa265aaf8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.304080 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.304800 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.309721 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7f4b26a-f41d-478c-b706-67baa265aaf8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc 
kubenswrapper[4904]: I1121 13:53:21.324676 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq22w\" (UniqueName: \"kubernetes.io/projected/b7f4b26a-f41d-478c-b706-67baa265aaf8-kube-api-access-pq22w\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.343114 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b7f4b26a-f41d-478c-b706-67baa265aaf8\") " pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:21 crc kubenswrapper[4904]: I1121 13:53:21.369427 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 21 13:53:26 crc kubenswrapper[4904]: I1121 13:53:26.198261 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-jt79x"] Nov 21 13:53:28 crc kubenswrapper[4904]: I1121 13:53:28.114821 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:53:28 crc kubenswrapper[4904]: I1121 13:53:28.114917 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.621175 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.622311 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rh24q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(554709ef-9d18-4a19-aded-0c8fe94e30e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.623689 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="554709ef-9d18-4a19-aded-0c8fe94e30e8" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.642098 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.642430 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 
20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqpxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f72ba976-6eb5-4886-81fc-3f7e4563039d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.643732 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.692136 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.692707 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxmtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(2987646d-06ff-44a7-b766-ff6ff19ed796): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:53:35 crc kubenswrapper[4904]: E1121 13:53:35.693949 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="2987646d-06ff-44a7-b766-ff6ff19ed796" Nov 21 13:53:36 crc kubenswrapper[4904]: E1121 13:53:36.047612 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="2987646d-06ff-44a7-b766-ff6ff19ed796" Nov 21 13:53:36 crc kubenswrapper[4904]: E1121 13:53:36.047730 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" Nov 21 13:53:36 crc kubenswrapper[4904]: E1121 13:53:36.047845 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" 
pod="openstack/openstack-galera-0" podUID="554709ef-9d18-4a19-aded-0c8fe94e30e8" Nov 21 13:53:36 crc kubenswrapper[4904]: I1121 13:53:36.869720 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:53:36 crc kubenswrapper[4904]: E1121 13:53:36.917679 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Nov 21 13:53:36 crc kubenswrapper[4904]: E1121 13:53:36.917931 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n75h97h65fh57bh675hfchbdh67fhb5hb6h5c6hf5h684h57h5fch556h66dh67h98h58h68fh57bh8fhd8h67h5cdh588h66ch567h66fh5b9hd9q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sgh2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(3f2e9b6a-f2dd-4647-9652-e5f609740a53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:53:36 crc kubenswrapper[4904]: E1121 13:53:36.919359 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="3f2e9b6a-f2dd-4647-9652-e5f609740a53" Nov 21 13:53:37 crc kubenswrapper[4904]: I1121 13:53:37.058331 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jt79x" event={"ID":"6f19242e-f99f-4408-9c53-7a92a8c191bc","Type":"ContainerStarted","Data":"18a47f1937a443b00b9a53f977f7028ac1b1049135d80899f348893418aeb311"} Nov 21 13:53:37 crc kubenswrapper[4904]: E1121 13:53:37.068592 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="3f2e9b6a-f2dd-4647-9652-e5f609740a53" Nov 21 13:53:38 crc kubenswrapper[4904]: E1121 13:53:38.028467 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 13:53:38 crc kubenswrapper[4904]: E1121 13:53:38.029466 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h48mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-f2725_openstack(8fdbed90-b15e-45d4-97f3-a76cd9a36d35): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:53:38 crc kubenswrapper[4904]: E1121 13:53:38.031083 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" podUID="8fdbed90-b15e-45d4-97f3-a76cd9a36d35" Nov 21 13:53:38 crc kubenswrapper[4904]: E1121 13:53:38.233112 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 13:53:38 crc kubenswrapper[4904]: E1121 13:53:38.233412 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vkg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-c5qt7_openstack(59898c2d-078d-48ba-8dfa-81604e47ce24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:53:38 crc kubenswrapper[4904]: E1121 13:53:38.234799 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.590687 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.660976 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.684968 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt"] Nov 21 13:53:38 crc kubenswrapper[4904]: W1121 13:53:38.691158 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ec880a_b9f8_4b7a_9d69_98b730a07a02.slice/crio-38498f287565f1d73842d85002486719bc944d4c9cd185d1e2b4e562a0bcd688 WatchSource:0}: Error finding container 38498f287565f1d73842d85002486719bc944d4c9cd185d1e2b4e562a0bcd688: Status 404 returned error can't find the container with id 38498f287565f1d73842d85002486719bc944d4c9cd185d1e2b4e562a0bcd688 Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.693044 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gh8lv"] Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.700879 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.754045 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h48mt\" (UniqueName: \"kubernetes.io/projected/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-kube-api-access-h48mt\") pod \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\" (UID: \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\") " Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.754170 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-config\") pod \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\" (UID: \"8fdbed90-b15e-45d4-97f3-a76cd9a36d35\") " Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.754926 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-config" (OuterVolumeSpecName: "config") pod "8fdbed90-b15e-45d4-97f3-a76cd9a36d35" (UID: "8fdbed90-b15e-45d4-97f3-a76cd9a36d35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.762505 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-kube-api-access-h48mt" (OuterVolumeSpecName: "kube-api-access-h48mt") pod "8fdbed90-b15e-45d4-97f3-a76cd9a36d35" (UID: "8fdbed90-b15e-45d4-97f3-a76cd9a36d35"). InnerVolumeSpecName "kube-api-access-h48mt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.859347 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.859395 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h48mt\" (UniqueName: \"kubernetes.io/projected/8fdbed90-b15e-45d4-97f3-a76cd9a36d35-kube-api-access-h48mt\") on node \"crc\" DevicePath \"\"" Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.869100 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-94fdb89dc-xcd5x"] Nov 21 13:53:38 crc kubenswrapper[4904]: W1121 13:53:38.954416 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod439ac172_d63e_4a04_a526_cc2c6e4e1685.slice/crio-c9464559581617a5b8f939f998e1bc9583960a49a9a5e35fb97115d91ae603c0 WatchSource:0}: Error finding container c9464559581617a5b8f939f998e1bc9583960a49a9a5e35fb97115d91ae603c0: Status 404 returned error can't find the container with id c9464559581617a5b8f939f998e1bc9583960a49a9a5e35fb97115d91ae603c0 Nov 21 13:53:38 crc kubenswrapper[4904]: I1121 13:53:38.989200 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.093794 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-94fdb89dc-xcd5x" event={"ID":"439ac172-d63e-4a04-a526-cc2c6e4e1685","Type":"ContainerStarted","Data":"c9464559581617a5b8f939f998e1bc9583960a49a9a5e35fb97115d91ae603c0"} Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.096335 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv" event={"ID":"48ec880a-b9f8-4b7a-9d69-98b730a07a02","Type":"ContainerStarted","Data":"38498f287565f1d73842d85002486719bc944d4c9cd185d1e2b4e562a0bcd688"} Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.098112 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1ac2319f-b7ee-441c-b325-5ca2d83c87e4","Type":"ContainerStarted","Data":"6bdb7267127b569ab177875947a2dde8ad8268f7e939e0441a14cc99494cc2ad"} Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.099577 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" event={"ID":"6a4ca754-03de-434a-b5ba-3f0d288b1e0c","Type":"ContainerStarted","Data":"3d7a821976ac2ce7a8fe528e97938050202f59d0e8af2c618499a2da7d94828b"} Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.100841 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b7f4b26a-f41d-478c-b706-67baa265aaf8","Type":"ContainerStarted","Data":"737cf9e9493e0b2d943c122a5f6f9c5c610e47b05c86f5c07cf36edd9bb9913c"} Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.102539 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerStarted","Data":"2c2862ac8c2065b179daee676b662ecffc87ccb9ffaf91ff1d1c908e18d672b6"} Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.105053 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" 
event={"ID":"8fdbed90-b15e-45d4-97f3-a76cd9a36d35","Type":"ContainerDied","Data":"a70f8396f4831fd5d07e23e480034b6dd2272bccc54e95cafbda54f774b514ea"} Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.105146 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f2725" Nov 21 13:53:39 crc kubenswrapper[4904]: E1121 13:53:39.105771 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" Nov 21 13:53:39 crc kubenswrapper[4904]: E1121 13:53:39.150983 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 13:53:39 crc kubenswrapper[4904]: E1121 13:53:39.151678 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gpcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xdlcj_openstack(c4099b60-5765-4105-b20b-3308b43df36e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:53:39 crc kubenswrapper[4904]: E1121 13:53:39.152911 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" podUID="c4099b60-5765-4105-b20b-3308b43df36e" Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.535349 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2725"] Nov 21 13:53:39 crc kubenswrapper[4904]: I1121 13:53:39.535435 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f2725"] Nov 21 13:53:40 crc kubenswrapper[4904]: I1121 13:53:40.117073 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-94fdb89dc-xcd5x" event={"ID":"439ac172-d63e-4a04-a526-cc2c6e4e1685","Type":"ContainerStarted","Data":"f4cfb4a1a62a2f47744f95c4d86a6e58e0e4eb73a3893f0972a1bbde19e17487"} Nov 21 13:53:40 crc kubenswrapper[4904]: I1121 13:53:40.121170 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"914527f1-8202-44fc-bbbb-4c39cf793a7b","Type":"ContainerStarted","Data":"9b5e43f6d2586379e718657832b561d8b0d6befa987673226ae450ccc8226177"} Nov 21 13:53:40 crc kubenswrapper[4904]: E1121 13:53:40.121976 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" podUID="c4099b60-5765-4105-b20b-3308b43df36e" Nov 21 13:53:40 crc kubenswrapper[4904]: I1121 13:53:40.150879 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-94fdb89dc-xcd5x" podStartSLOduration=26.150843198 podStartE2EDuration="26.150843198s" podCreationTimestamp="2025-11-21 13:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:53:40.137244043 +0000 UTC m=+1294.258776665" watchObservedRunningTime="2025-11-21 13:53:40.150843198 +0000 UTC m=+1294.272375770" Nov 21 13:53:40 crc kubenswrapper[4904]: I1121 13:53:40.542526 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fdbed90-b15e-45d4-97f3-a76cd9a36d35" path="/var/lib/kubelet/pods/8fdbed90-b15e-45d4-97f3-a76cd9a36d35/volumes" Nov 21 13:53:43 crc kubenswrapper[4904]: E1121 13:53:43.017891 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 21 13:53:43 crc kubenswrapper[4904]: E1121 13:53:43.018448 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rff98,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9ckps_openstack(e4259de2-ba62-4476-8a7f-f143b38daab8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:53:43 crc kubenswrapper[4904]: E1121 13:53:43.020557 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" podUID="e4259de2-ba62-4476-8a7f-f143b38daab8" Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.288879 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.289468 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.295340 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.875385 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.941232 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rff98\" (UniqueName: \"kubernetes.io/projected/e4259de2-ba62-4476-8a7f-f143b38daab8-kube-api-access-rff98\") pod \"e4259de2-ba62-4476-8a7f-f143b38daab8\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.941334 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-dns-svc\") pod \"e4259de2-ba62-4476-8a7f-f143b38daab8\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.941691 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-config\") pod \"e4259de2-ba62-4476-8a7f-f143b38daab8\" (UID: \"e4259de2-ba62-4476-8a7f-f143b38daab8\") " Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.942966 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-config" (OuterVolumeSpecName: "config") pod "e4259de2-ba62-4476-8a7f-f143b38daab8" (UID: "e4259de2-ba62-4476-8a7f-f143b38daab8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.944474 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4259de2-ba62-4476-8a7f-f143b38daab8" (UID: "e4259de2-ba62-4476-8a7f-f143b38daab8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:53:45 crc kubenswrapper[4904]: I1121 13:53:45.951772 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4259de2-ba62-4476-8a7f-f143b38daab8-kube-api-access-rff98" (OuterVolumeSpecName: "kube-api-access-rff98") pod "e4259de2-ba62-4476-8a7f-f143b38daab8" (UID: "e4259de2-ba62-4476-8a7f-f143b38daab8"). InnerVolumeSpecName "kube-api-access-rff98". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.047275 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.047316 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rff98\" (UniqueName: \"kubernetes.io/projected/e4259de2-ba62-4476-8a7f-f143b38daab8-kube-api-access-rff98\") on node \"crc\" DevicePath \"\"" Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.047330 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4259de2-ba62-4476-8a7f-f143b38daab8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.183181 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.183216 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9ckps" event={"ID":"e4259de2-ba62-4476-8a7f-f143b38daab8","Type":"ContainerDied","Data":"44ed9b5fe7cac777a8b63ab9f073073fbb986593d8313106fca312b3d610726f"} Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.187861 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-94fdb89dc-xcd5x" Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.277966 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f55c9947c-tc6x6"] Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.309326 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9ckps"] Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.318224 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9ckps"] Nov 21 13:53:46 crc kubenswrapper[4904]: I1121 13:53:46.529428 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4259de2-ba62-4476-8a7f-f143b38daab8" path="/var/lib/kubelet/pods/e4259de2-ba62-4476-8a7f-f143b38daab8/volumes" Nov 21 13:53:53 crc kubenswrapper[4904]: I1121 13:53:53.293975 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1ac2319f-b7ee-441c-b325-5ca2d83c87e4","Type":"ContainerStarted","Data":"512a526ca8faf7aa112e4b3c09341426243cb8be46f253ee9922e3a7c446f519"} Nov 21 13:53:53 crc kubenswrapper[4904]: I1121 13:53:53.298822 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" event={"ID":"6a4ca754-03de-434a-b5ba-3f0d288b1e0c","Type":"ContainerStarted","Data":"1be2f4b6a31dcd5dffbcddf0ed3568855961d91c3a23c2e82fa920a638692ed0"} Nov 21 13:53:53 crc kubenswrapper[4904]: I1121 13:53:53.315929 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-nfqjt" podStartSLOduration=28.717140785 podStartE2EDuration="39.315908975s" podCreationTimestamp="2025-11-21 13:53:14 +0000 UTC" firstStartedPulling="2025-11-21 13:53:38.691307745 +0000 UTC m=+1292.812840297" lastFinishedPulling="2025-11-21 13:53:49.290075935 +0000 UTC m=+1303.411608487" observedRunningTime="2025-11-21 13:53:53.314332927 +0000 UTC m=+1307.435865479" watchObservedRunningTime="2025-11-21 13:53:53.315908975 +0000 UTC m=+1307.437441527" Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.314889 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3f2e9b6a-f2dd-4647-9652-e5f609740a53","Type":"ContainerStarted","Data":"98b18bfb3e48e047c15d925613eec65c6fcce20872b710a0626c135b7aafb340"} Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.316061 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.318957 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2987646d-06ff-44a7-b766-ff6ff19ed796","Type":"ContainerStarted","Data":"a0f6b0d3d075b5d733ccde4d3f246100b2b55b3e36efea9951b3866cd3bd4d8f"} Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.322430 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"b7f4b26a-f41d-478c-b706-67baa265aaf8","Type":"ContainerStarted","Data":"d9ae1ec54405e782840ae6bb3ccedb6c6bb775a15ae85378b6016a1de5a44c83"} Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.323737 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"554709ef-9d18-4a19-aded-0c8fe94e30e8","Type":"ContainerStarted","Data":"00f955e90cca2985a8f16717ac33f3bb948df6c1fe96a41b53c1efc92a6c7d62"} Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.325980 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv" event={"ID":"48ec880a-b9f8-4b7a-9d69-98b730a07a02","Type":"ContainerStarted","Data":"5e931782524b16734eb0338063768f28cdbc46808a7a65088d03b7f2acb71834"} Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.326213 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gh8lv" Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.327323 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jt79x" event={"ID":"6f19242e-f99f-4408-9c53-7a92a8c191bc","Type":"ContainerStarted","Data":"eb4cd7c5f38990dcc91809b9b444e534645b3601efb4a9dada5c6e2346893e8b"} Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.338615 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.555192808 podStartE2EDuration="44.3385926s" podCreationTimestamp="2025-11-21 13:53:10 +0000 UTC" firstStartedPulling="2025-11-21 13:53:12.457060228 +0000 UTC m=+1266.578592780" lastFinishedPulling="2025-11-21 13:53:52.24046002 +0000 UTC m=+1306.361992572" observedRunningTime="2025-11-21 13:53:54.337159475 +0000 UTC m=+1308.458692027" watchObservedRunningTime="2025-11-21 13:53:54.3385926 +0000 UTC m=+1308.460125152" Nov 21 13:53:54 crc kubenswrapper[4904]: I1121 13:53:54.388128 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gh8lv" podStartSLOduration=24.668424787 podStartE2EDuration="37.388102031s" podCreationTimestamp="2025-11-21 13:53:17 +0000 UTC" firstStartedPulling="2025-11-21 13:53:38.696610977 +0000 UTC m=+1292.818143529" lastFinishedPulling="2025-11-21 13:53:51.416288231 +0000 UTC m=+1305.537820773" observedRunningTime="2025-11-21 13:53:54.387382233 +0000 UTC m=+1308.508914805" watchObservedRunningTime="2025-11-21 13:53:54.388102031 +0000 UTC m=+1308.509634583" Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.345003 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f72ba976-6eb5-4886-81fc-3f7e4563039d","Type":"ContainerStarted","Data":"3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4"} Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.352025 4904 generic.go:334] "Generic (PLEG): container finished" podID="6f19242e-f99f-4408-9c53-7a92a8c191bc" containerID="eb4cd7c5f38990dcc91809b9b444e534645b3601efb4a9dada5c6e2346893e8b" exitCode=0 Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.352839 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jt79x" event={"ID":"6f19242e-f99f-4408-9c53-7a92a8c191bc","Type":"ContainerDied","Data":"eb4cd7c5f38990dcc91809b9b444e534645b3601efb4a9dada5c6e2346893e8b"} Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.357607 4904 generic.go:334] "Generic (PLEG): container finished" podID="59898c2d-078d-48ba-8dfa-81604e47ce24" 
containerID="5e8d67443899f5b8cf6230b18defde35ec4e6af8d823213947dd2ad2c853741d" exitCode=0 Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.357704 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" event={"ID":"59898c2d-078d-48ba-8dfa-81604e47ce24","Type":"ContainerDied","Data":"5e8d67443899f5b8cf6230b18defde35ec4e6af8d823213947dd2ad2c853741d"} Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.360300 4904 generic.go:334] "Generic (PLEG): container finished" podID="c4099b60-5765-4105-b20b-3308b43df36e" containerID="61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502" exitCode=0 Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.360366 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" event={"ID":"c4099b60-5765-4105-b20b-3308b43df36e","Type":"ContainerDied","Data":"61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502"} Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.367829 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c951951-b705-4ab1-b041-887d038f35ec","Type":"ContainerStarted","Data":"4135b5b1c43c893249862613ded08d1c2602a90d6ec1282a8c7bdb6aa24a31b6"} Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.368463 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 21 13:53:55 crc kubenswrapper[4904]: I1121 13:53:55.529110 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.634186531 podStartE2EDuration="42.529082389s" podCreationTimestamp="2025-11-21 13:53:13 +0000 UTC" firstStartedPulling="2025-11-21 13:53:20.770118776 +0000 UTC m=+1274.891651328" lastFinishedPulling="2025-11-21 13:53:53.665014634 +0000 UTC m=+1307.786547186" observedRunningTime="2025-11-21 13:53:55.502074628 +0000 UTC m=+1309.623607180" watchObservedRunningTime="2025-11-21 13:53:55.529082389 +0000 UTC m=+1309.650614931" Nov 21 13:53:56 crc kubenswrapper[4904]: I1121 13:53:56.380355 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" event={"ID":"59898c2d-078d-48ba-8dfa-81604e47ce24","Type":"ContainerStarted","Data":"06d6f32cb68234f453e5c3335681f1a60ae7a66908efe968e1bdc141c0fd28d6"} Nov 21 13:53:56 crc kubenswrapper[4904]: I1121 13:53:56.384441 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" event={"ID":"c4099b60-5765-4105-b20b-3308b43df36e","Type":"ContainerStarted","Data":"03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae"} Nov 21 13:53:57 crc kubenswrapper[4904]: I1121 13:53:57.402532 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jt79x" event={"ID":"6f19242e-f99f-4408-9c53-7a92a8c191bc","Type":"ContainerStarted","Data":"77385946e705a5f04b09182657a9967204705a57bc5564428678841dee157d19"} Nov 21 13:53:57 crc kubenswrapper[4904]: I1121 13:53:57.405763 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerStarted","Data":"765505c8365ce0ec76632eff3fe092dee7bd56917c4ea7d46b65870b6b28d8c9"} Nov 21 13:53:57 crc kubenswrapper[4904]: I1121 13:53:57.459213 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" podStartSLOduration=-9223371986.395588 
podStartE2EDuration="50.459188079s" podCreationTimestamp="2025-11-21 13:53:07 +0000 UTC" firstStartedPulling="2025-11-21 13:53:08.18195024 +0000 UTC m=+1262.303482792" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:53:57.450403744 +0000 UTC m=+1311.571936296" watchObservedRunningTime="2025-11-21 13:53:57.459188079 +0000 UTC m=+1311.580720631" Nov 21 13:53:57 crc kubenswrapper[4904]: I1121 13:53:57.473994 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" podStartSLOduration=5.184073201 podStartE2EDuration="51.473972111s" podCreationTimestamp="2025-11-21 13:53:06 +0000 UTC" firstStartedPulling="2025-11-21 13:53:08.011734336 +0000 UTC m=+1262.133266888" lastFinishedPulling="2025-11-21 13:53:54.301633236 +0000 UTC m=+1308.423165798" observedRunningTime="2025-11-21 13:53:57.467163534 +0000 UTC m=+1311.588696086" watchObservedRunningTime="2025-11-21 13:53:57.473972111 +0000 UTC m=+1311.595504653" Nov 21 13:53:57 crc kubenswrapper[4904]: I1121 13:53:57.553783 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:53:58 crc kubenswrapper[4904]: I1121 13:53:58.113533 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:53:58 crc kubenswrapper[4904]: I1121 13:53:58.114027 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.469504 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jt79x" event={"ID":"6f19242e-f99f-4408-9c53-7a92a8c191bc","Type":"ContainerStarted","Data":"fc1082baea9dfd5564d8d87da6a993c9f0964c9270b55f6ded907fcdb47e8b3d"} Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.470058 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jt79x" Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.470079 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jt79x" Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.473052 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1ac2319f-b7ee-441c-b325-5ca2d83c87e4","Type":"ContainerStarted","Data":"02bf0c6014dcb8484cf92fdbd6e3ddd81b3cc94299f0372bc66dded943b17017"} Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.477688 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b7f4b26a-f41d-478c-b706-67baa265aaf8","Type":"ContainerStarted","Data":"227a18386d0a1c1ceba811fd00874dfc768bad84515b497529358bf16992e71e"} Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.494844 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.498931 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jt79x" 
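The machine-config-daemon failure above is an ordinary HTTP liveness probe hitting 127.0.0.1:8798/health and getting connection refused. What the prober does is essentially an HTTP GET that treats any transport error or a status outside the 2xx/3xx range as failure; a stripped-down equivalent (the URL comes from the log, the timeout value is an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP mimics the shape of a kubelet HTTP probe: a transport error
// (e.g. "connect: connection refused") or a status code outside 200-399
// counts as a probe failure.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // connection refused lands here
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHTTP("http://127.0.0.1:8798/health", 1*time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}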
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.469504 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-jt79x" event={"ID":"6f19242e-f99f-4408-9c53-7a92a8c191bc","Type":"ContainerStarted","Data":"fc1082baea9dfd5564d8d87da6a993c9f0964c9270b55f6ded907fcdb47e8b3d"}
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.470058 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.470079 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-jt79x"
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.473052 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1ac2319f-b7ee-441c-b325-5ca2d83c87e4","Type":"ContainerStarted","Data":"02bf0c6014dcb8484cf92fdbd6e3ddd81b3cc94299f0372bc66dded943b17017"}
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.477688 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b7f4b26a-f41d-478c-b706-67baa265aaf8","Type":"ContainerStarted","Data":"227a18386d0a1c1ceba811fd00874dfc768bad84515b497529358bf16992e71e"}
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.494844 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.498931 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-jt79x" podStartSLOduration=32.079092704 podStartE2EDuration="44.498911179s" podCreationTimestamp="2025-11-21 13:53:17 +0000 UTC" firstStartedPulling="2025-11-21 13:53:36.869353487 +0000 UTC m=+1290.990886039" lastFinishedPulling="2025-11-21 13:53:49.289171962 +0000 UTC m=+1303.410704514" observedRunningTime="2025-11-21 13:54:01.498859418 +0000 UTC m=+1315.620391990" watchObservedRunningTime="2025-11-21 13:54:01.498911179 +0000 UTC m=+1315.620443731"
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.562281 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.43945315 podStartE2EDuration="45.562254259s" podCreationTimestamp="2025-11-21 13:53:16 +0000 UTC" firstStartedPulling="2025-11-21 13:53:38.680324831 +0000 UTC m=+1292.801857383" lastFinishedPulling="2025-11-21 13:54:00.80312594 +0000 UTC m=+1314.924658492" observedRunningTime="2025-11-21 13:54:01.561367167 +0000 UTC m=+1315.682899729" watchObservedRunningTime="2025-11-21 13:54:01.562254259 +0000 UTC m=+1315.683786811"
Nov 21 13:54:01 crc kubenswrapper[4904]: I1121 13:54:01.593618 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=20.869162138 podStartE2EDuration="42.593589845s" podCreationTimestamp="2025-11-21 13:53:19 +0000 UTC" firstStartedPulling="2025-11-21 13:53:39.05769676 +0000 UTC m=+1293.179229312" lastFinishedPulling="2025-11-21 13:54:00.782124467 +0000 UTC m=+1314.903657019" observedRunningTime="2025-11-21 13:54:01.591981326 +0000 UTC m=+1315.713513878" watchObservedRunningTime="2025-11-21 13:54:01.593589845 +0000 UTC m=+1315.715122397"
Nov 21 13:54:02 crc kubenswrapper[4904]: I1121 13:54:02.210477 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj"
Nov 21 13:54:02 crc kubenswrapper[4904]: I1121 13:54:02.211483 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj"
Nov 21 13:54:02 crc kubenswrapper[4904]: I1121 13:54:02.394930 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 21 13:54:02 crc kubenswrapper[4904]: I1121 13:54:02.395377 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Nov 21 13:54:02 crc kubenswrapper[4904]: I1121 13:54:02.587822 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7"
Nov 21 13:54:02 crc kubenswrapper[4904]: I1121 13:54:02.587948 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Nov 21 13:54:02 crc kubenswrapper[4904]: I1121 13:54:02.692491 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdlcj"]
Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.371329 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.448774 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.514604 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.515385 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" podUID="c4099b60-5765-4105-b20b-3308b43df36e" containerName="dnsmasq-dns" containerID="cri-o://03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae" gracePeriod=10
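"Killing container with a grace period ... gracePeriod=10" above is the standard termination sequence: the runtime signals the container's init process with SIGTERM, waits up to the grace period, and only then sends SIGKILL. A process-level sketch of the same pattern (the sleep command is just a stand-in for a container process):

package main

import (
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Ask politely first, as the runtime does with the container's PID 1.
	_ = cmd.Process.Signal(syscall.SIGTERM)

	select {
	case <-done:
		// Exited within the grace period.
	case <-time.After(10 * time.Second): // gracePeriod=10 from the log
		_ = cmd.Process.Kill() // SIGKILL once the grace period expires
		<-done
	}
}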
pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" podUID="c4099b60-5765-4105-b20b-3308b43df36e" containerName="dnsmasq-dns" containerID="cri-o://03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae" gracePeriod=10 Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.596012 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mr22l"] Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.598142 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.626537 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mr22l"] Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.626785 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.647984 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.650484 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.720890 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhml\" (UniqueName: \"kubernetes.io/projected/d32dbae7-cb70-4e14-be08-0c1f178d23de-kube-api-access-wfhml\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.720987 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-config\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.721007 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.825706 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfhml\" (UniqueName: \"kubernetes.io/projected/d32dbae7-cb70-4e14-be08-0c1f178d23de-kube-api-access-wfhml\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.825805 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-config\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.825827 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: 
\"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.827176 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.828382 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-config\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:03 crc kubenswrapper[4904]: I1121 13:54:03.871110 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfhml\" (UniqueName: \"kubernetes.io/projected/d32dbae7-cb70-4e14-be08-0c1f178d23de-kube-api-access-wfhml\") pod \"dnsmasq-dns-7cb5889db5-mr22l\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.021042 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.045207 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mr22l"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.087191 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-rqx8c"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.090862 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.108904 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.129343 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-rqx8c"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.195575 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-tp8md"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.206843 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.216559 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.218942 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tp8md"]
Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.241074 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c"
Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.241389 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbp2\" (UniqueName: \"kubernetes.io/projected/122c6ad7-3901-480d-bcad-fae5e86fea9d-kube-api-access-7lbp2\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c"
Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.241500 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c"
Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.241598 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-config\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c"
Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.257196 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj"
Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.333736 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-rqx8c"]
Nov 21 13:54:04 crc kubenswrapper[4904]: E1121 13:54:04.336184 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-7lbp2 ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" podUID="122c6ad7-3901-480d-bcad-fae5e86fea9d"
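The "Error syncing pod, skipping ... context canceled" event above is the pod worker's context being canceled mid-sync: dnsmasq-dns-74f6f696b9-rqx8c was deleted by the API while its volumes were still being processed, and in Go that surfaces as ctx.Err() == context.Canceled. A small sketch of the pattern:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// mountVolumes stands in for a long-running sync step that must stop
// promptly when the pod is deleted out from under it.
func mountVolumes(ctx context.Context) error {
	select {
	case <-time.After(5 * time.Second): // pretend mounting takes a while
		return nil
	case <-ctx.Done():
		return fmt.Errorf("failed to process volumes: %w", ctx.Err())
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		time.Sleep(100 * time.Millisecond)
		cancel() // the pod was deleted: abandon the sync
	}()

	if err := mountVolumes(ctx); errors.Is(err, context.Canceled) {
		fmt.Println("Error syncing pod, skipping:", err)
	}
}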
\"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.346024 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-ovn-rundir\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.346096 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-config\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.346134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.346175 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-ovs-rundir\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.346200 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lbp2\" (UniqueName: \"kubernetes.io/projected/122c6ad7-3901-480d-bcad-fae5e86fea9d-kube-api-access-7lbp2\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.351040 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.351039 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.354872 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4099b60-5765-4105-b20b-3308b43df36e-kube-api-access-9gpcb" (OuterVolumeSpecName: "kube-api-access-9gpcb") pod "c4099b60-5765-4105-b20b-3308b43df36e" (UID: "c4099b60-5765-4105-b20b-3308b43df36e"). InnerVolumeSpecName "kube-api-access-9gpcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.356921 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-config\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.362033 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 21 13:54:04 crc kubenswrapper[4904]: E1121 13:54:04.362825 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4099b60-5765-4105-b20b-3308b43df36e" containerName="dnsmasq-dns" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.362847 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4099b60-5765-4105-b20b-3308b43df36e" containerName="dnsmasq-dns" Nov 21 13:54:04 crc kubenswrapper[4904]: E1121 13:54:04.362861 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4099b60-5765-4105-b20b-3308b43df36e" containerName="init" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.362867 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4099b60-5765-4105-b20b-3308b43df36e" containerName="init" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.363032 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4099b60-5765-4105-b20b-3308b43df36e" containerName="dnsmasq-dns" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.364465 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.383532 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.383858 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.383911 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pw85r" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.384214 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.388992 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lbp2\" (UniqueName: \"kubernetes.io/projected/122c6ad7-3901-480d-bcad-fae5e86fea9d-kube-api-access-7lbp2\") pod \"dnsmasq-dns-74f6f696b9-rqx8c\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.405964 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.412792 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-8qd6n"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.414612 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.438272 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-ovs-rundir\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448131 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5208bbb4-5fe6-4507-9308-35202ce25115-scripts\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448169 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448197 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5208bbb4-5fe6-4507-9308-35202ce25115-config\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448215 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448238 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5208bbb4-5fe6-4507-9308-35202ce25115-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448311 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpkrh\" (UniqueName: \"kubernetes.io/projected/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-kube-api-access-mpkrh\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448337 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-combined-ca-bundle\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448365 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-ovn-rundir\") pod 
\"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448393 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448423 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448448 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-config\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448440 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-ovs-rundir\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448536 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfwc9\" (UniqueName: \"kubernetes.io/projected/5208bbb4-5fe6-4507-9308-35202ce25115-kube-api-access-rfwc9\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.448762 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gpcb\" (UniqueName: \"kubernetes.io/projected/c4099b60-5765-4105-b20b-3308b43df36e-kube-api-access-9gpcb\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.449354 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-ovn-rundir\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.449994 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-config\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.456206 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8qd6n"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.460276 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-tp8md\" (UID: 
\"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.498502 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4099b60-5765-4105-b20b-3308b43df36e" (UID: "c4099b60-5765-4105-b20b-3308b43df36e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.532418 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-combined-ca-bundle\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.539147 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-config" (OuterVolumeSpecName: "config") pod "c4099b60-5765-4105-b20b-3308b43df36e" (UID: "c4099b60-5765-4105-b20b-3308b43df36e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.546331 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpkrh\" (UniqueName: \"kubernetes.io/projected/c8dbdbe2-57e0-435a-ab3e-4dd526d584b3-kube-api-access-mpkrh\") pod \"ovn-controller-metrics-tp8md\" (UID: \"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3\") " pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.575571 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5208bbb4-5fe6-4507-9308-35202ce25115-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.575859 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.575918 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.575966 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.576030 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfwc9\" (UniqueName: \"kubernetes.io/projected/5208bbb4-5fe6-4507-9308-35202ce25115-kube-api-access-rfwc9\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " 
pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.576084 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-dns-svc\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.576347 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5208bbb4-5fe6-4507-9308-35202ce25115-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.584039 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-tp8md" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.585011 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.587479 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5208bbb4-5fe6-4507-9308-35202ce25115-scripts\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.587759 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.587822 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jd2n\" (UniqueName: \"kubernetes.io/projected/6e1635ad-9fa5-4e0c-8efd-c6127723feba-kube-api-access-8jd2n\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.587863 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5208bbb4-5fe6-4507-9308-35202ce25115-config\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.587886 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.587915 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-config\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " 
pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.588151 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.588173 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4099b60-5765-4105-b20b-3308b43df36e-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.589349 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5208bbb4-5fe6-4507-9308-35202ce25115-scripts\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.603774 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.604968 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5208bbb4-5fe6-4507-9308-35202ce25115-config\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.611330 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5208bbb4-5fe6-4507-9308-35202ce25115-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.619448 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfwc9\" (UniqueName: \"kubernetes.io/projected/5208bbb4-5fe6-4507-9308-35202ce25115-kube-api-access-rfwc9\") pod \"ovn-northd-0\" (UID: \"5208bbb4-5fe6-4507-9308-35202ce25115\") " pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.630115 4904 generic.go:334] "Generic (PLEG): container finished" podID="2987646d-06ff-44a7-b766-ff6ff19ed796" containerID="a0f6b0d3d075b5d733ccde4d3f246100b2b55b3e36efea9951b3866cd3bd4d8f" exitCode=0 Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.630230 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2987646d-06ff-44a7-b766-ff6ff19ed796","Type":"ContainerDied","Data":"a0f6b0d3d075b5d733ccde4d3f246100b2b55b3e36efea9951b3866cd3bd4d8f"} Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.694419 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.696528 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: 
\"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.697452 4904 generic.go:334] "Generic (PLEG): container finished" podID="f0496532-016f-4090-9d82-1b500e179cd1" containerID="765505c8365ce0ec76632eff3fe092dee7bd56917c4ea7d46b65870b6b28d8c9" exitCode=0 Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.697487 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerDied","Data":"765505c8365ce0ec76632eff3fe092dee7bd56917c4ea7d46b65870b6b28d8c9"} Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.709578 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-dns-svc\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.710584 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.710617 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jd2n\" (UniqueName: \"kubernetes.io/projected/6e1635ad-9fa5-4e0c-8efd-c6127723feba-kube-api-access-8jd2n\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.710645 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-config\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.711248 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-config\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.710431 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-dns-svc\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.720722 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.725942 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.767294 4904 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.768572 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.794952 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.795386 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-228ss" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.795631 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.795775 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.797315 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.801115 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jd2n\" (UniqueName: \"kubernetes.io/projected/6e1635ad-9fa5-4e0c-8efd-c6127723feba-kube-api-access-8jd2n\") pod \"dnsmasq-dns-698758b865-8qd6n\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.826471 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.845045 4904 generic.go:334] "Generic (PLEG): container finished" podID="554709ef-9d18-4a19-aded-0c8fe94e30e8" containerID="00f955e90cca2985a8f16717ac33f3bb948df6c1fe96a41b53c1efc92a6c7d62" exitCode=0 Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.845207 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"554709ef-9d18-4a19-aded-0c8fe94e30e8","Type":"ContainerDied","Data":"00f955e90cca2985a8f16717ac33f3bb948df6c1fe96a41b53c1efc92a6c7d62"} Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.906080 4904 generic.go:334] "Generic (PLEG): container finished" podID="c4099b60-5765-4105-b20b-3308b43df36e" containerID="03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae" exitCode=0 Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.906244 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.906702 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" event={"ID":"c4099b60-5765-4105-b20b-3308b43df36e","Type":"ContainerDied","Data":"03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae"} Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.906768 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" event={"ID":"c4099b60-5765-4105-b20b-3308b43df36e","Type":"ContainerDied","Data":"ccd48cf0274dee650a08d799bea2ad700c1f54e67766f4a2a565d0d399882ca8"} Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.906796 4904 scope.go:117] "RemoveContainer" containerID="03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.906870 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xdlcj" Nov 21 13:54:04 crc kubenswrapper[4904]: I1121 13:54:04.988819 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.005809 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mr22l"] Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.015213 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xdlcj"] Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.017639 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-lock\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.017754 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.019178 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-cache\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.019305 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.019351 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gddp8\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-kube-api-access-gddp8\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.021774 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-666b6646f7-xdlcj"] Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.053341 4904 scope.go:117] "RemoveContainer" containerID="61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.120462 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lbp2\" (UniqueName: \"kubernetes.io/projected/122c6ad7-3901-480d-bcad-fae5e86fea9d-kube-api-access-7lbp2\") pod \"122c6ad7-3901-480d-bcad-fae5e86fea9d\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.121197 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-config\") pod \"122c6ad7-3901-480d-bcad-fae5e86fea9d\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.121294 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-dns-svc\") pod \"122c6ad7-3901-480d-bcad-fae5e86fea9d\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.121347 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-ovsdbserver-nb\") pod \"122c6ad7-3901-480d-bcad-fae5e86fea9d\" (UID: \"122c6ad7-3901-480d-bcad-fae5e86fea9d\") " Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.121592 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.121637 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gddp8\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-kube-api-access-gddp8\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.121736 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-lock\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.121860 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.121905 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-cache\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.123075 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-lock\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.123680 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-config" (OuterVolumeSpecName: "config") pod "122c6ad7-3901-480d-bcad-fae5e86fea9d" (UID: "122c6ad7-3901-480d-bcad-fae5e86fea9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.124056 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "122c6ad7-3901-480d-bcad-fae5e86fea9d" (UID: "122c6ad7-3901-480d-bcad-fae5e86fea9d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.124116 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: E1121 13:54:05.124148 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:54:05 crc kubenswrapper[4904]: E1121 13:54:05.124165 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:54:05 crc kubenswrapper[4904]: E1121 13:54:05.124373 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift podName:00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae nodeName:}" failed. No retries permitted until 2025-11-21 13:54:05.624327147 +0000 UTC m=+1319.745859699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift") pod "swift-storage-0" (UID: "00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae") : configmap "swift-ring-files" not found Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.124501 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "122c6ad7-3901-480d-bcad-fae5e86fea9d" (UID: "122c6ad7-3901-480d-bcad-fae5e86fea9d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.124601 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-cache\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.131763 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/122c6ad7-3901-480d-bcad-fae5e86fea9d-kube-api-access-7lbp2" (OuterVolumeSpecName: "kube-api-access-7lbp2") pod "122c6ad7-3901-480d-bcad-fae5e86fea9d" (UID: "122c6ad7-3901-480d-bcad-fae5e86fea9d"). 
InnerVolumeSpecName "kube-api-access-7lbp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.135487 4904 scope.go:117] "RemoveContainer" containerID="03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae" Nov 21 13:54:05 crc kubenswrapper[4904]: E1121 13:54:05.140849 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae\": container with ID starting with 03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae not found: ID does not exist" containerID="03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.140920 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae"} err="failed to get container status \"03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae\": rpc error: code = NotFound desc = could not find container \"03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae\": container with ID starting with 03209e72d907280fa1ca8dc16a4fe0c479da4552ead8f83fea8d1187e860e0ae not found: ID does not exist" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.140961 4904 scope.go:117] "RemoveContainer" containerID="61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.141964 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gddp8\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-kube-api-access-gddp8\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:05 crc kubenswrapper[4904]: E1121 13:54:05.142776 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502\": container with ID starting with 61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502 not found: ID does not exist" containerID="61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.142845 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502"} err="failed to get container status \"61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502\": rpc error: code = NotFound desc = could not find container \"61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502\": container with ID starting with 61b171452453677fc237bfc3a1b12f767f9fe0c6fef18ff03aad08e12a53e502 not found: ID does not exist" Nov 21 13:54:05 crc kubenswrapper[4904]: I1121 13:54:05.177692 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.224285 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:06 crc 
kubenswrapper[4904]: I1121 13:54:05.224317 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.224328 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/122c6ad7-3901-480d-bcad-fae5e86fea9d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.224343 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lbp2\" (UniqueName: \"kubernetes.io/projected/122c6ad7-3901-480d-bcad-fae5e86fea9d-kube-api-access-7lbp2\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.269426 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-x9zjr"] Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.274222 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.277510 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.278278 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.278530 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.295563 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-x9zjr"] Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.341407 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-tp8md"] Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.427937 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-ring-data-devices\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.428009 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c847f685-6c92-40df-9608-675e8f21c058-etc-swift\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.428039 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-combined-ca-bundle\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.428065 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-scripts\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " 
pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.428114 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-dispersionconf\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.428298 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-swiftconf\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.428519 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x449j\" (UniqueName: \"kubernetes.io/projected/c847f685-6c92-40df-9608-675e8f21c058-kube-api-access-x449j\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.534745 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c847f685-6c92-40df-9608-675e8f21c058-etc-swift\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.534802 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-combined-ca-bundle\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.534878 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-scripts\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.535008 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-dispersionconf\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.535063 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-swiftconf\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.535143 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x449j\" (UniqueName: \"kubernetes.io/projected/c847f685-6c92-40df-9608-675e8f21c058-kube-api-access-x449j\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 
13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.535268 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c847f685-6c92-40df-9608-675e8f21c058-etc-swift\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.535368 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-ring-data-devices\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.536471 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-ring-data-devices\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.536476 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-scripts\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.543002 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-combined-ca-bundle\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.546355 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-swiftconf\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.546605 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-dispersionconf\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.566780 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x449j\" (UniqueName: \"kubernetes.io/projected/c847f685-6c92-40df-9608-675e8f21c058-kube-api-access-x449j\") pod \"swift-ring-rebalance-x9zjr\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.622094 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.638140 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:06 crc kubenswrapper[4904]: E1121 13:54:05.638441 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:54:06 crc kubenswrapper[4904]: E1121 13:54:05.638471 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:54:06 crc kubenswrapper[4904]: E1121 13:54:05.638545 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift podName:00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae nodeName:}" failed. No retries permitted until 2025-11-21 13:54:06.638519203 +0000 UTC m=+1320.760051755 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift") pod "swift-storage-0" (UID: "00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae") : configmap "swift-ring-files" not found Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.922590 4904 generic.go:334] "Generic (PLEG): container finished" podID="d32dbae7-cb70-4e14-be08-0c1f178d23de" containerID="d8c1b485ac0831926a5942e7122853539a899d8d0e29462b808c38f6b72117da" exitCode=0 Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.922901 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" event={"ID":"d32dbae7-cb70-4e14-be08-0c1f178d23de","Type":"ContainerDied","Data":"d8c1b485ac0831926a5942e7122853539a899d8d0e29462b808c38f6b72117da"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.922944 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" event={"ID":"d32dbae7-cb70-4e14-be08-0c1f178d23de","Type":"ContainerStarted","Data":"dff067fb7555410ebe4f7638b44703dd65feb79a6c7355d6f7af67d302e3a27b"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.931425 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"2987646d-06ff-44a7-b766-ff6ff19ed796","Type":"ContainerStarted","Data":"6ed37034bb1aace60ab0c70dd0abe82067864d6f001dc5a25372c07d414d43e0"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.939346 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"554709ef-9d18-4a19-aded-0c8fe94e30e8","Type":"ContainerStarted","Data":"9484ef7809be347b05047c9afa3aca643334bb8e0e908dcfaf1bb930b3dcbf32"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.947298 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tp8md" event={"ID":"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3","Type":"ContainerStarted","Data":"05a71f8cce9cd2dc6d7ce9f0aaedbb78d6dfe80b2ab275f79a29f7f9edff673f"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.947403 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-tp8md" 
event={"ID":"c8dbdbe2-57e0-435a-ab3e-4dd526d584b3","Type":"ContainerStarted","Data":"f01ddc341d229bf30816f5681e32c2764212ea4f10aebd22c1aba6b7e7001be1"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.947722 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-rqx8c" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:05.979893 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.668022348 podStartE2EDuration="56.979868923s" podCreationTimestamp="2025-11-21 13:53:09 +0000 UTC" firstStartedPulling="2025-11-21 13:53:12.536551035 +0000 UTC m=+1266.658083587" lastFinishedPulling="2025-11-21 13:53:52.84839757 +0000 UTC m=+1306.969930162" observedRunningTime="2025-11-21 13:54:05.979132554 +0000 UTC m=+1320.100665126" watchObservedRunningTime="2025-11-21 13:54:05.979868923 +0000 UTC m=+1320.101401475" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.086831 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.51607954 podStartE2EDuration="58.086808048s" podCreationTimestamp="2025-11-21 13:53:08 +0000 UTC" firstStartedPulling="2025-11-21 13:53:11.429356462 +0000 UTC m=+1265.550889014" lastFinishedPulling="2025-11-21 13:53:52.00008496 +0000 UTC m=+1306.121617522" observedRunningTime="2025-11-21 13:54:06.018056627 +0000 UTC m=+1320.139589179" watchObservedRunningTime="2025-11-21 13:54:06.086808048 +0000 UTC m=+1320.208340610" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.128952 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-rqx8c"] Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.146030 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-rqx8c"] Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.146160 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-tp8md" podStartSLOduration=2.146132009 podStartE2EDuration="2.146132009s" podCreationTimestamp="2025-11-21 13:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:54:06.090809386 +0000 UTC m=+1320.212341938" watchObservedRunningTime="2025-11-21 13:54:06.146132009 +0000 UTC m=+1320.267664561" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.407324 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.540833 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="122c6ad7-3901-480d-bcad-fae5e86fea9d" path="/var/lib/kubelet/pods/122c6ad7-3901-480d-bcad-fae5e86fea9d/volumes" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.541286 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4099b60-5765-4105-b20b-3308b43df36e" path="/var/lib/kubelet/pods/c4099b60-5765-4105-b20b-3308b43df36e/volumes" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.575816 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8qd6n"] Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.627361 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.665319 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-config\") pod \"d32dbae7-cb70-4e14-be08-0c1f178d23de\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.665472 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfhml\" (UniqueName: \"kubernetes.io/projected/d32dbae7-cb70-4e14-be08-0c1f178d23de-kube-api-access-wfhml\") pod \"d32dbae7-cb70-4e14-be08-0c1f178d23de\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.665632 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-dns-svc\") pod \"d32dbae7-cb70-4e14-be08-0c1f178d23de\" (UID: \"d32dbae7-cb70-4e14-be08-0c1f178d23de\") " Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.666060 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:06 crc kubenswrapper[4904]: E1121 13:54:06.667799 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:54:06 crc kubenswrapper[4904]: E1121 13:54:06.667821 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:54:06 crc kubenswrapper[4904]: E1121 13:54:06.667926 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift podName:00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae nodeName:}" failed. No retries permitted until 2025-11-21 13:54:08.66785163 +0000 UTC m=+1322.789384182 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift") pod "swift-storage-0" (UID: "00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae") : configmap "swift-ring-files" not found Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.672580 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32dbae7-cb70-4e14-be08-0c1f178d23de-kube-api-access-wfhml" (OuterVolumeSpecName: "kube-api-access-wfhml") pod "d32dbae7-cb70-4e14-be08-0c1f178d23de" (UID: "d32dbae7-cb70-4e14-be08-0c1f178d23de"). InnerVolumeSpecName "kube-api-access-wfhml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.694041 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-config" (OuterVolumeSpecName: "config") pod "d32dbae7-cb70-4e14-be08-0c1f178d23de" (UID: "d32dbae7-cb70-4e14-be08-0c1f178d23de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.698488 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d32dbae7-cb70-4e14-be08-0c1f178d23de" (UID: "d32dbae7-cb70-4e14-be08-0c1f178d23de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.713539 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-x9zjr"] Nov 21 13:54:06 crc kubenswrapper[4904]: W1121 13:54:06.728555 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc847f685_6c92_40df_9608_675e8f21c058.slice/crio-bc4d50b1dde11319cbf5c2dbcf2cbecf08b7d784ba7facb96c5db42491ec47a4 WatchSource:0}: Error finding container bc4d50b1dde11319cbf5c2dbcf2cbecf08b7d784ba7facb96c5db42491ec47a4: Status 404 returned error can't find the container with id bc4d50b1dde11319cbf5c2dbcf2cbecf08b7d784ba7facb96c5db42491ec47a4 Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.768340 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.768373 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d32dbae7-cb70-4e14-be08-0c1f178d23de-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.768383 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfhml\" (UniqueName: \"kubernetes.io/projected/d32dbae7-cb70-4e14-be08-0c1f178d23de-kube-api-access-wfhml\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.965393 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5208bbb4-5fe6-4507-9308-35202ce25115","Type":"ContainerStarted","Data":"eb3bce0639fac1e3a9f3cb2f478b7a2f41b20ebf0a0768747a95c6135d873531"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.967238 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x9zjr" event={"ID":"c847f685-6c92-40df-9608-675e8f21c058","Type":"ContainerStarted","Data":"bc4d50b1dde11319cbf5c2dbcf2cbecf08b7d784ba7facb96c5db42491ec47a4"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.970694 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" event={"ID":"d32dbae7-cb70-4e14-be08-0c1f178d23de","Type":"ContainerDied","Data":"dff067fb7555410ebe4f7638b44703dd65feb79a6c7355d6f7af67d302e3a27b"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.970736 4904 scope.go:117] "RemoveContainer" containerID="d8c1b485ac0831926a5942e7122853539a899d8d0e29462b808c38f6b72117da" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.970869 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mr22l" Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.981535 4904 generic.go:334] "Generic (PLEG): container finished" podID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerID="f8ff95a905b3f2f3400f3c975d56b2199481c904baf836e7f16c5ba5503518a6" exitCode=0 Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.983319 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8qd6n" event={"ID":"6e1635ad-9fa5-4e0c-8efd-c6127723feba","Type":"ContainerDied","Data":"f8ff95a905b3f2f3400f3c975d56b2199481c904baf836e7f16c5ba5503518a6"} Nov 21 13:54:06 crc kubenswrapper[4904]: I1121 13:54:06.983359 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8qd6n" event={"ID":"6e1635ad-9fa5-4e0c-8efd-c6127723feba","Type":"ContainerStarted","Data":"4261c93f5804b6cdb1311e81321d80ab30d2b0191dcc1429fa0a0d4e41a085c4"} Nov 21 13:54:07 crc kubenswrapper[4904]: I1121 13:54:07.111534 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mr22l"] Nov 21 13:54:07 crc kubenswrapper[4904]: I1121 13:54:07.130880 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mr22l"] Nov 21 13:54:08 crc kubenswrapper[4904]: I1121 13:54:08.003829 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8qd6n" event={"ID":"6e1635ad-9fa5-4e0c-8efd-c6127723feba","Type":"ContainerStarted","Data":"e8157fcf7931f7d2bac074b8e00b0fac273552bf72ad80e87dd4b69b9f75d446"} Nov 21 13:54:08 crc kubenswrapper[4904]: I1121 13:54:08.004215 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:08 crc kubenswrapper[4904]: I1121 13:54:08.040220 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-8qd6n" podStartSLOduration=4.040159867 podStartE2EDuration="4.040159867s" podCreationTimestamp="2025-11-21 13:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:54:08.030970832 +0000 UTC m=+1322.152503394" watchObservedRunningTime="2025-11-21 13:54:08.040159867 +0000 UTC m=+1322.161692439" Nov 21 13:54:08 crc kubenswrapper[4904]: I1121 13:54:08.532826 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d32dbae7-cb70-4e14-be08-0c1f178d23de" path="/var/lib/kubelet/pods/d32dbae7-cb70-4e14-be08-0c1f178d23de/volumes" Nov 21 13:54:08 crc kubenswrapper[4904]: I1121 13:54:08.718708 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:08 crc kubenswrapper[4904]: E1121 13:54:08.718966 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:54:08 crc kubenswrapper[4904]: E1121 13:54:08.718999 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:54:08 crc kubenswrapper[4904]: E1121 13:54:08.719069 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift 
podName:00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae nodeName:}" failed. No retries permitted until 2025-11-21 13:54:12.719045392 +0000 UTC m=+1326.840577944 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift") pod "swift-storage-0" (UID: "00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae") : configmap "swift-ring-files" not found Nov 21 13:54:09 crc kubenswrapper[4904]: I1121 13:54:09.963490 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 21 13:54:09 crc kubenswrapper[4904]: I1121 13:54:09.963995 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 21 13:54:11 crc kubenswrapper[4904]: I1121 13:54:11.348381 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7f55c9947c-tc6x6" podUID="ace620dd-9305-41c2-a78f-f52c221ae850" containerName="console" containerID="cri-o://8174f83b6168556d4013276f711da3a023497d034035af40f015dfebc3ca83a5" gracePeriod=15 Nov 21 13:54:11 crc kubenswrapper[4904]: I1121 13:54:11.623215 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 21 13:54:11 crc kubenswrapper[4904]: I1121 13:54:11.623278 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 21 13:54:12 crc kubenswrapper[4904]: I1121 13:54:12.048317 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f55c9947c-tc6x6_ace620dd-9305-41c2-a78f-f52c221ae850/console/0.log" Nov 21 13:54:12 crc kubenswrapper[4904]: I1121 13:54:12.048406 4904 generic.go:334] "Generic (PLEG): container finished" podID="ace620dd-9305-41c2-a78f-f52c221ae850" containerID="8174f83b6168556d4013276f711da3a023497d034035af40f015dfebc3ca83a5" exitCode=2 Nov 21 13:54:12 crc kubenswrapper[4904]: I1121 13:54:12.048462 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f55c9947c-tc6x6" event={"ID":"ace620dd-9305-41c2-a78f-f52c221ae850","Type":"ContainerDied","Data":"8174f83b6168556d4013276f711da3a023497d034035af40f015dfebc3ca83a5"} Nov 21 13:54:12 crc kubenswrapper[4904]: I1121 13:54:12.820509 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:12 crc kubenswrapper[4904]: E1121 13:54:12.820879 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:54:12 crc kubenswrapper[4904]: E1121 13:54:12.821219 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:54:12 crc kubenswrapper[4904]: E1121 13:54:12.821319 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift podName:00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae nodeName:}" failed. No retries permitted until 2025-11-21 13:54:20.821288882 +0000 UTC m=+1334.942821474 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift") pod "swift-storage-0" (UID: "00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae") : configmap "swift-ring-files" not found Nov 21 13:54:12 crc kubenswrapper[4904]: I1121 13:54:12.915929 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 21 13:54:12 crc kubenswrapper[4904]: E1121 13:54:12.945057 4904 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.138:44350->38.102.83.138:45309: write tcp 38.102.83.138:44350->38.102.83.138:45309: write: broken pipe Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.043526 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.067343 4904 generic.go:334] "Generic (PLEG): container finished" podID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerID="9b5e43f6d2586379e718657832b561d8b0d6befa987673226ae450ccc8226177" exitCode=0 Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.067488 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"914527f1-8202-44fc-bbbb-4c39cf793a7b","Type":"ContainerDied","Data":"9b5e43f6d2586379e718657832b561d8b0d6befa987673226ae450ccc8226177"} Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.545347 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bnss2"] Nov 21 13:54:13 crc kubenswrapper[4904]: E1121 13:54:13.545910 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32dbae7-cb70-4e14-be08-0c1f178d23de" containerName="init" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.545925 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32dbae7-cb70-4e14-be08-0c1f178d23de" containerName="init" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.546177 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d32dbae7-cb70-4e14-be08-0c1f178d23de" containerName="init" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.547038 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.554016 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bnss2"] Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.586285 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-b2e7-account-create-htps7"] Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.588153 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.591006 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.602466 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b2e7-account-create-htps7"] Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.639987 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqrl\" (UniqueName: \"kubernetes.io/projected/23a702c6-8a09-49ca-9e41-4d35f38c39db-kube-api-access-6mqrl\") pod \"mysqld-exporter-openstack-db-create-bnss2\" (UID: \"23a702c6-8a09-49ca-9e41-4d35f38c39db\") " pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.640049 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a702c6-8a09-49ca-9e41-4d35f38c39db-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bnss2\" (UID: \"23a702c6-8a09-49ca-9e41-4d35f38c39db\") " pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.742383 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-559xp\" (UniqueName: \"kubernetes.io/projected/e7d795f1-b251-48e1-a516-911f462ed052-kube-api-access-559xp\") pod \"mysqld-exporter-b2e7-account-create-htps7\" (UID: \"e7d795f1-b251-48e1-a516-911f462ed052\") " pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.742456 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mqrl\" (UniqueName: \"kubernetes.io/projected/23a702c6-8a09-49ca-9e41-4d35f38c39db-kube-api-access-6mqrl\") pod \"mysqld-exporter-openstack-db-create-bnss2\" (UID: \"23a702c6-8a09-49ca-9e41-4d35f38c39db\") " pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.742498 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a702c6-8a09-49ca-9e41-4d35f38c39db-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bnss2\" (UID: \"23a702c6-8a09-49ca-9e41-4d35f38c39db\") " pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.742568 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d795f1-b251-48e1-a516-911f462ed052-operator-scripts\") pod \"mysqld-exporter-b2e7-account-create-htps7\" (UID: \"e7d795f1-b251-48e1-a516-911f462ed052\") " pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.743368 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a702c6-8a09-49ca-9e41-4d35f38c39db-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-bnss2\" (UID: \"23a702c6-8a09-49ca-9e41-4d35f38c39db\") " pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.763480 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mqrl\" (UniqueName: \"kubernetes.io/projected/23a702c6-8a09-49ca-9e41-4d35f38c39db-kube-api-access-6mqrl\") pod \"mysqld-exporter-openstack-db-create-bnss2\" (UID: \"23a702c6-8a09-49ca-9e41-4d35f38c39db\") " pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.846618 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-559xp\" (UniqueName: \"kubernetes.io/projected/e7d795f1-b251-48e1-a516-911f462ed052-kube-api-access-559xp\") pod \"mysqld-exporter-b2e7-account-create-htps7\" (UID: \"e7d795f1-b251-48e1-a516-911f462ed052\") " pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.847775 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d795f1-b251-48e1-a516-911f462ed052-operator-scripts\") pod \"mysqld-exporter-b2e7-account-create-htps7\" (UID: \"e7d795f1-b251-48e1-a516-911f462ed052\") " pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.848875 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d795f1-b251-48e1-a516-911f462ed052-operator-scripts\") pod \"mysqld-exporter-b2e7-account-create-htps7\" (UID: \"e7d795f1-b251-48e1-a516-911f462ed052\") " pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.877865 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.880643 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-559xp\" (UniqueName: \"kubernetes.io/projected/e7d795f1-b251-48e1-a516-911f462ed052-kube-api-access-559xp\") pod \"mysqld-exporter-b2e7-account-create-htps7\" (UID: \"e7d795f1-b251-48e1-a516-911f462ed052\") " pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:13 crc kubenswrapper[4904]: I1121 13:54:13.912570 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:14 crc kubenswrapper[4904]: I1121 13:54:14.830000 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:54:14 crc kubenswrapper[4904]: I1121 13:54:14.938392 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c5qt7"] Nov 21 13:54:14 crc kubenswrapper[4904]: I1121 13:54:14.938644 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" containerName="dnsmasq-dns" containerID="cri-o://06d6f32cb68234f453e5c3335681f1a60ae7a66908efe968e1bdc141c0fd28d6" gracePeriod=10 Nov 21 13:54:15 crc kubenswrapper[4904]: I1121 13:54:15.092827 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 21 13:54:15 crc kubenswrapper[4904]: I1121 13:54:15.105186 4904 generic.go:334] "Generic (PLEG): container finished" podID="59898c2d-078d-48ba-8dfa-81604e47ce24" containerID="06d6f32cb68234f453e5c3335681f1a60ae7a66908efe968e1bdc141c0fd28d6" exitCode=0 Nov 21 13:54:15 crc kubenswrapper[4904]: I1121 13:54:15.105240 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" event={"ID":"59898c2d-078d-48ba-8dfa-81604e47ce24","Type":"ContainerDied","Data":"06d6f32cb68234f453e5c3335681f1a60ae7a66908efe968e1bdc141c0fd28d6"} Nov 21 13:54:15 crc kubenswrapper[4904]: I1121 13:54:15.183307 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 21 13:54:17 crc kubenswrapper[4904]: I1121 13:54:17.553779 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: connect: connection refused" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.117560 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f55c9947c-tc6x6_ace620dd-9305-41c2-a78f-f52c221ae850/console/0.log" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.118431 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.146717 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-oauth-serving-cert\") pod \"ace620dd-9305-41c2-a78f-f52c221ae850\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.146809 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-service-ca\") pod \"ace620dd-9305-41c2-a78f-f52c221ae850\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.146974 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-serving-cert\") pod \"ace620dd-9305-41c2-a78f-f52c221ae850\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.147308 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-trusted-ca-bundle\") pod \"ace620dd-9305-41c2-a78f-f52c221ae850\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.147390 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp5rm\" (UniqueName: \"kubernetes.io/projected/ace620dd-9305-41c2-a78f-f52c221ae850-kube-api-access-sp5rm\") pod \"ace620dd-9305-41c2-a78f-f52c221ae850\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.147435 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-console-config\") pod \"ace620dd-9305-41c2-a78f-f52c221ae850\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.147519 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-oauth-config\") pod \"ace620dd-9305-41c2-a78f-f52c221ae850\" (UID: \"ace620dd-9305-41c2-a78f-f52c221ae850\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.148498 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ace620dd-9305-41c2-a78f-f52c221ae850" (UID: "ace620dd-9305-41c2-a78f-f52c221ae850"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.148856 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-service-ca" (OuterVolumeSpecName: "service-ca") pod "ace620dd-9305-41c2-a78f-f52c221ae850" (UID: "ace620dd-9305-41c2-a78f-f52c221ae850"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.155277 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ace620dd-9305-41c2-a78f-f52c221ae850" (UID: "ace620dd-9305-41c2-a78f-f52c221ae850"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.155597 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7f55c9947c-tc6x6_ace620dd-9305-41c2-a78f-f52c221ae850/console/0.log" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.155670 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f55c9947c-tc6x6" event={"ID":"ace620dd-9305-41c2-a78f-f52c221ae850","Type":"ContainerDied","Data":"657e1b3e7e5fcfcbac1e753d23fbf2b8ecbd0853f41a7b4afcdd59db2be83c3b"} Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.155721 4904 scope.go:117] "RemoveContainer" containerID="8174f83b6168556d4013276f711da3a023497d034035af40f015dfebc3ca83a5" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.155901 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f55c9947c-tc6x6" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.157228 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-console-config" (OuterVolumeSpecName: "console-config") pod "ace620dd-9305-41c2-a78f-f52c221ae850" (UID: "ace620dd-9305-41c2-a78f-f52c221ae850"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.164177 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ace620dd-9305-41c2-a78f-f52c221ae850" (UID: "ace620dd-9305-41c2-a78f-f52c221ae850"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.180186 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace620dd-9305-41c2-a78f-f52c221ae850-kube-api-access-sp5rm" (OuterVolumeSpecName: "kube-api-access-sp5rm") pod "ace620dd-9305-41c2-a78f-f52c221ae850" (UID: "ace620dd-9305-41c2-a78f-f52c221ae850"). InnerVolumeSpecName "kube-api-access-sp5rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.187209 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ace620dd-9305-41c2-a78f-f52c221ae850" (UID: "ace620dd-9305-41c2-a78f-f52c221ae850"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.249617 4904 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.249680 4904 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-service-ca\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.249694 4904 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.249703 4904 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.249712 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp5rm\" (UniqueName: \"kubernetes.io/projected/ace620dd-9305-41c2-a78f-f52c221ae850-kube-api-access-sp5rm\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.249723 4904 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ace620dd-9305-41c2-a78f-f52c221ae850-console-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.249731 4904 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ace620dd-9305-41c2-a78f-f52c221ae850-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.272919 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.350766 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-dns-svc\") pod \"59898c2d-078d-48ba-8dfa-81604e47ce24\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.350973 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-config\") pod \"59898c2d-078d-48ba-8dfa-81604e47ce24\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.351084 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vkg6\" (UniqueName: \"kubernetes.io/projected/59898c2d-078d-48ba-8dfa-81604e47ce24-kube-api-access-7vkg6\") pod \"59898c2d-078d-48ba-8dfa-81604e47ce24\" (UID: \"59898c2d-078d-48ba-8dfa-81604e47ce24\") " Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.366424 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59898c2d-078d-48ba-8dfa-81604e47ce24-kube-api-access-7vkg6" (OuterVolumeSpecName: "kube-api-access-7vkg6") pod "59898c2d-078d-48ba-8dfa-81604e47ce24" (UID: "59898c2d-078d-48ba-8dfa-81604e47ce24"). InnerVolumeSpecName "kube-api-access-7vkg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.453678 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vkg6\" (UniqueName: \"kubernetes.io/projected/59898c2d-078d-48ba-8dfa-81604e47ce24-kube-api-access-7vkg6\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.506818 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7f55c9947c-tc6x6"] Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.509578 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-config" (OuterVolumeSpecName: "config") pod "59898c2d-078d-48ba-8dfa-81604e47ce24" (UID: "59898c2d-078d-48ba-8dfa-81604e47ce24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.512729 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59898c2d-078d-48ba-8dfa-81604e47ce24" (UID: "59898c2d-078d-48ba-8dfa-81604e47ce24"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.537884 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7f55c9947c-tc6x6"] Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.547144 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-b2e7-account-create-htps7"] Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.557697 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.557719 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59898c2d-078d-48ba-8dfa-81604e47ce24-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:18 crc kubenswrapper[4904]: I1121 13:54:18.601938 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bnss2"] Nov 21 13:54:18 crc kubenswrapper[4904]: W1121 13:54:18.602724 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23a702c6_8a09_49ca_9e41_4d35f38c39db.slice/crio-70c0a242771c1ab8cabf384d0c7b21fc00cdbc230e2633d050fdb9aebea95bab WatchSource:0}: Error finding container 70c0a242771c1ab8cabf384d0c7b21fc00cdbc230e2633d050fdb9aebea95bab: Status 404 returned error can't find the container with id 70c0a242771c1ab8cabf384d0c7b21fc00cdbc230e2633d050fdb9aebea95bab Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.167136 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5208bbb4-5fe6-4507-9308-35202ce25115","Type":"ContainerStarted","Data":"fc5ec22cf4c4f0ffa6a3582c51d0a356f7cc0e0d8ed50f37be2affbeac0d979e"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.167727 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.167745 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5208bbb4-5fe6-4507-9308-35202ce25115","Type":"ContainerStarted","Data":"1f9582ab39e3ecb8c154ee13f87a41c1c527153178b9cd69ca7517a5600183b4"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.172500 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"914527f1-8202-44fc-bbbb-4c39cf793a7b","Type":"ContainerStarted","Data":"19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.175183 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.175196 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-c5qt7" event={"ID":"59898c2d-078d-48ba-8dfa-81604e47ce24","Type":"ContainerDied","Data":"a8c3765341e796da339c0bc73592ae82473d0173c074a789dfda8b636f3a1067"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.175366 4904 scope.go:117] "RemoveContainer" containerID="06d6f32cb68234f453e5c3335681f1a60ae7a66908efe968e1bdc141c0fd28d6" Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.179459 4904 generic.go:334] "Generic (PLEG): container finished" podID="23a702c6-8a09-49ca-9e41-4d35f38c39db" containerID="c8daccdc7702d51206d0348d52998a120b65f15d105105652611e4619efe976f" exitCode=0 Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.179548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bnss2" event={"ID":"23a702c6-8a09-49ca-9e41-4d35f38c39db","Type":"ContainerDied","Data":"c8daccdc7702d51206d0348d52998a120b65f15d105105652611e4619efe976f"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.179594 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bnss2" event={"ID":"23a702c6-8a09-49ca-9e41-4d35f38c39db","Type":"ContainerStarted","Data":"70c0a242771c1ab8cabf384d0c7b21fc00cdbc230e2633d050fdb9aebea95bab"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.181777 4904 generic.go:334] "Generic (PLEG): container finished" podID="e7d795f1-b251-48e1-a516-911f462ed052" containerID="7d810b64082ac75247b55dfc2e769b02f46584bf89364de67343b6bb418feeb9" exitCode=0 Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.181831 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b2e7-account-create-htps7" event={"ID":"e7d795f1-b251-48e1-a516-911f462ed052","Type":"ContainerDied","Data":"7d810b64082ac75247b55dfc2e769b02f46584bf89364de67343b6bb418feeb9"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.181888 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b2e7-account-create-htps7" event={"ID":"e7d795f1-b251-48e1-a516-911f462ed052","Type":"ContainerStarted","Data":"e7031cb1af2a3dc18e8a2d954cb3572166fc2b8b30ffb15aacb5979ddb1f0122"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.185158 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerStarted","Data":"309e80dda1be2d3f0158d58a33ad56f06d8569a9860a7a84c80ac9093ab004d9"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.190441 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x9zjr" event={"ID":"c847f685-6c92-40df-9608-675e8f21c058","Type":"ContainerStarted","Data":"98775b9e3f82e994858a88a1eba865e30467f54a8e86b083191719be6d44f564"} Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.203992 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.784855161 podStartE2EDuration="15.203970731s" podCreationTimestamp="2025-11-21 13:54:04 +0000 UTC" firstStartedPulling="2025-11-21 13:54:06.450702809 +0000 UTC m=+1320.572235361" lastFinishedPulling="2025-11-21 13:54:17.869818379 +0000 UTC m=+1331.991350931" observedRunningTime="2025-11-21 13:54:19.197399961 +0000 UTC m=+1333.318932533" 
watchObservedRunningTime="2025-11-21 13:54:19.203970731 +0000 UTC m=+1333.325503283" Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.228849 4904 scope.go:117] "RemoveContainer" containerID="5e8d67443899f5b8cf6230b18defde35ec4e6af8d823213947dd2ad2c853741d" Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.266798 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.974717595 podStartE2EDuration="1m12.266777568s" podCreationTimestamp="2025-11-21 13:53:07 +0000 UTC" firstStartedPulling="2025-11-21 13:53:09.620676992 +0000 UTC m=+1263.742209544" lastFinishedPulling="2025-11-21 13:53:37.912736955 +0000 UTC m=+1292.034269517" observedRunningTime="2025-11-21 13:54:19.26155177 +0000 UTC m=+1333.383084342" watchObservedRunningTime="2025-11-21 13:54:19.266777568 +0000 UTC m=+1333.388310120" Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.293146 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-x9zjr" podStartSLOduration=3.055942193 podStartE2EDuration="14.293116372s" podCreationTimestamp="2025-11-21 13:54:05 +0000 UTC" firstStartedPulling="2025-11-21 13:54:06.74835596 +0000 UTC m=+1320.869888512" lastFinishedPulling="2025-11-21 13:54:17.985530119 +0000 UTC m=+1332.107062691" observedRunningTime="2025-11-21 13:54:19.286047669 +0000 UTC m=+1333.407580231" watchObservedRunningTime="2025-11-21 13:54:19.293116372 +0000 UTC m=+1333.414648944" Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.346932 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c5qt7"] Nov 21 13:54:19 crc kubenswrapper[4904]: I1121 13:54:19.361851 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-c5qt7"] Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.544258 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" path="/var/lib/kubelet/pods/59898c2d-078d-48ba-8dfa-81604e47ce24/volumes" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.546392 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ace620dd-9305-41c2-a78f-f52c221ae850" path="/var/lib/kubelet/pods/ace620dd-9305-41c2-a78f-f52c221ae850/volumes" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.723297 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.727879 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.819325 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a702c6-8a09-49ca-9e41-4d35f38c39db-operator-scripts\") pod \"23a702c6-8a09-49ca-9e41-4d35f38c39db\" (UID: \"23a702c6-8a09-49ca-9e41-4d35f38c39db\") " Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.819472 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mqrl\" (UniqueName: \"kubernetes.io/projected/23a702c6-8a09-49ca-9e41-4d35f38c39db-kube-api-access-6mqrl\") pod \"23a702c6-8a09-49ca-9e41-4d35f38c39db\" (UID: \"23a702c6-8a09-49ca-9e41-4d35f38c39db\") " Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.819504 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-559xp\" (UniqueName: \"kubernetes.io/projected/e7d795f1-b251-48e1-a516-911f462ed052-kube-api-access-559xp\") pod \"e7d795f1-b251-48e1-a516-911f462ed052\" (UID: \"e7d795f1-b251-48e1-a516-911f462ed052\") " Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.819558 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d795f1-b251-48e1-a516-911f462ed052-operator-scripts\") pod \"e7d795f1-b251-48e1-a516-911f462ed052\" (UID: \"e7d795f1-b251-48e1-a516-911f462ed052\") " Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.820292 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7d795f1-b251-48e1-a516-911f462ed052-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7d795f1-b251-48e1-a516-911f462ed052" (UID: "e7d795f1-b251-48e1-a516-911f462ed052"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.820301 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23a702c6-8a09-49ca-9e41-4d35f38c39db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23a702c6-8a09-49ca-9e41-4d35f38c39db" (UID: "23a702c6-8a09-49ca-9e41-4d35f38c39db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.829016 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7d795f1-b251-48e1-a516-911f462ed052-kube-api-access-559xp" (OuterVolumeSpecName: "kube-api-access-559xp") pod "e7d795f1-b251-48e1-a516-911f462ed052" (UID: "e7d795f1-b251-48e1-a516-911f462ed052"). InnerVolumeSpecName "kube-api-access-559xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.831426 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a702c6-8a09-49ca-9e41-4d35f38c39db-kube-api-access-6mqrl" (OuterVolumeSpecName: "kube-api-access-6mqrl") pod "23a702c6-8a09-49ca-9e41-4d35f38c39db" (UID: "23a702c6-8a09-49ca-9e41-4d35f38c39db"). InnerVolumeSpecName "kube-api-access-6mqrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.882248 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-tmc7t"] Nov 21 13:54:20 crc kubenswrapper[4904]: E1121 13:54:20.882818 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace620dd-9305-41c2-a78f-f52c221ae850" containerName="console" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.882840 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace620dd-9305-41c2-a78f-f52c221ae850" containerName="console" Nov 21 13:54:20 crc kubenswrapper[4904]: E1121 13:54:20.882858 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" containerName="dnsmasq-dns" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.882866 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" containerName="dnsmasq-dns" Nov 21 13:54:20 crc kubenswrapper[4904]: E1121 13:54:20.882882 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a702c6-8a09-49ca-9e41-4d35f38c39db" containerName="mariadb-database-create" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.882888 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a702c6-8a09-49ca-9e41-4d35f38c39db" containerName="mariadb-database-create" Nov 21 13:54:20 crc kubenswrapper[4904]: E1121 13:54:20.882904 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7d795f1-b251-48e1-a516-911f462ed052" containerName="mariadb-account-create" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.882910 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7d795f1-b251-48e1-a516-911f462ed052" containerName="mariadb-account-create" Nov 21 13:54:20 crc kubenswrapper[4904]: E1121 13:54:20.882934 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" containerName="init" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.882941 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" containerName="init" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.883161 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="59898c2d-078d-48ba-8dfa-81604e47ce24" containerName="dnsmasq-dns" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.883183 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a702c6-8a09-49ca-9e41-4d35f38c39db" containerName="mariadb-database-create" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.883198 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7d795f1-b251-48e1-a516-911f462ed052" containerName="mariadb-account-create" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.883212 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace620dd-9305-41c2-a78f-f52c221ae850" containerName="console" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.884028 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.897955 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tmc7t"] Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.921507 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7d1006-0238-446c-a9f8-da640c971ed2-operator-scripts\") pod \"keystone-db-create-tmc7t\" (UID: \"ec7d1006-0238-446c-a9f8-da640c971ed2\") " pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.921599 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.921690 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smdwb\" (UniqueName: \"kubernetes.io/projected/ec7d1006-0238-446c-a9f8-da640c971ed2-kube-api-access-smdwb\") pod \"keystone-db-create-tmc7t\" (UID: \"ec7d1006-0238-446c-a9f8-da640c971ed2\") " pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:20 crc kubenswrapper[4904]: E1121 13:54:20.921817 4904 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 21 13:54:20 crc kubenswrapper[4904]: E1121 13:54:20.921852 4904 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.921868 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23a702c6-8a09-49ca-9e41-4d35f38c39db-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.921885 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mqrl\" (UniqueName: \"kubernetes.io/projected/23a702c6-8a09-49ca-9e41-4d35f38c39db-kube-api-access-6mqrl\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:20 crc kubenswrapper[4904]: E1121 13:54:20.921911 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift podName:00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae nodeName:}" failed. No retries permitted until 2025-11-21 13:54:36.921889131 +0000 UTC m=+1351.043421683 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift") pod "swift-storage-0" (UID: "00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae") : configmap "swift-ring-files" not found Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.921949 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-559xp\" (UniqueName: \"kubernetes.io/projected/e7d795f1-b251-48e1-a516-911f462ed052-kube-api-access-559xp\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.921967 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7d795f1-b251-48e1-a516-911f462ed052-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.981381 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cb90-account-create-jfxjg"] Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.982747 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.985571 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 21 13:54:20 crc kubenswrapper[4904]: I1121 13:54:20.993572 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cb90-account-create-jfxjg"] Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.022727 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125736-8015-45e0-987a-c795d3aabcc8-operator-scripts\") pod \"keystone-cb90-account-create-jfxjg\" (UID: \"5b125736-8015-45e0-987a-c795d3aabcc8\") " pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.022944 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7d1006-0238-446c-a9f8-da640c971ed2-operator-scripts\") pod \"keystone-db-create-tmc7t\" (UID: \"ec7d1006-0238-446c-a9f8-da640c971ed2\") " pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.023087 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92j7\" (UniqueName: \"kubernetes.io/projected/5b125736-8015-45e0-987a-c795d3aabcc8-kube-api-access-h92j7\") pod \"keystone-cb90-account-create-jfxjg\" (UID: \"5b125736-8015-45e0-987a-c795d3aabcc8\") " pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.023337 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smdwb\" (UniqueName: \"kubernetes.io/projected/ec7d1006-0238-446c-a9f8-da640c971ed2-kube-api-access-smdwb\") pod \"keystone-db-create-tmc7t\" (UID: \"ec7d1006-0238-446c-a9f8-da640c971ed2\") " pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.023958 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7d1006-0238-446c-a9f8-da640c971ed2-operator-scripts\") pod \"keystone-db-create-tmc7t\" (UID: \"ec7d1006-0238-446c-a9f8-da640c971ed2\") " pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:21 crc kubenswrapper[4904]: 
I1121 13:54:21.046316 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smdwb\" (UniqueName: \"kubernetes.io/projected/ec7d1006-0238-446c-a9f8-da640c971ed2-kube-api-access-smdwb\") pod \"keystone-db-create-tmc7t\" (UID: \"ec7d1006-0238-446c-a9f8-da640c971ed2\") " pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.125629 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125736-8015-45e0-987a-c795d3aabcc8-operator-scripts\") pod \"keystone-cb90-account-create-jfxjg\" (UID: \"5b125736-8015-45e0-987a-c795d3aabcc8\") " pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.125741 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h92j7\" (UniqueName: \"kubernetes.io/projected/5b125736-8015-45e0-987a-c795d3aabcc8-kube-api-access-h92j7\") pod \"keystone-cb90-account-create-jfxjg\" (UID: \"5b125736-8015-45e0-987a-c795d3aabcc8\") " pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.127343 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125736-8015-45e0-987a-c795d3aabcc8-operator-scripts\") pod \"keystone-cb90-account-create-jfxjg\" (UID: \"5b125736-8015-45e0-987a-c795d3aabcc8\") " pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.154373 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92j7\" (UniqueName: \"kubernetes.io/projected/5b125736-8015-45e0-987a-c795d3aabcc8-kube-api-access-h92j7\") pod \"keystone-cb90-account-create-jfxjg\" (UID: \"5b125736-8015-45e0-987a-c795d3aabcc8\") " pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.219202 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-bnss2" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.219264 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-bnss2" event={"ID":"23a702c6-8a09-49ca-9e41-4d35f38c39db","Type":"ContainerDied","Data":"70c0a242771c1ab8cabf384d0c7b21fc00cdbc230e2633d050fdb9aebea95bab"} Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.219347 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c0a242771c1ab8cabf384d0c7b21fc00cdbc230e2633d050fdb9aebea95bab" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.222220 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-b2e7-account-create-htps7" event={"ID":"e7d795f1-b251-48e1-a516-911f462ed052","Type":"ContainerDied","Data":"e7031cb1af2a3dc18e8a2d954cb3572166fc2b8b30ffb15aacb5979ddb1f0122"} Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.222349 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-b2e7-account-create-htps7" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.222325 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7031cb1af2a3dc18e8a2d954cb3572166fc2b8b30ffb15aacb5979ddb1f0122" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.226399 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xt26t"] Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.227440 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.227927 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xt26t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.295226 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f956-account-create-w8cb6"] Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.296935 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.300238 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.302387 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.320432 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f956-account-create-w8cb6"] Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.329736 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c92q\" (UniqueName: \"kubernetes.io/projected/20e9748c-0870-4dc2-bdc8-4843d15bd49f-kube-api-access-5c92q\") pod \"placement-db-create-xt26t\" (UID: \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\") " pod="openstack/placement-db-create-xt26t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.329848 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be4959da-6d83-452d-a3e8-4796466c7d2f-operator-scripts\") pod \"placement-f956-account-create-w8cb6\" (UID: \"be4959da-6d83-452d-a3e8-4796466c7d2f\") " pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.329878 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbzgj\" (UniqueName: \"kubernetes.io/projected/be4959da-6d83-452d-a3e8-4796466c7d2f-kube-api-access-nbzgj\") pod \"placement-f956-account-create-w8cb6\" (UID: \"be4959da-6d83-452d-a3e8-4796466c7d2f\") " pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.329916 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e9748c-0870-4dc2-bdc8-4843d15bd49f-operator-scripts\") pod \"placement-db-create-xt26t\" (UID: \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\") " pod="openstack/placement-db-create-xt26t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.332027 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xt26t"] Nov 
21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.432784 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e9748c-0870-4dc2-bdc8-4843d15bd49f-operator-scripts\") pod \"placement-db-create-xt26t\" (UID: \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\") " pod="openstack/placement-db-create-xt26t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.433109 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c92q\" (UniqueName: \"kubernetes.io/projected/20e9748c-0870-4dc2-bdc8-4843d15bd49f-kube-api-access-5c92q\") pod \"placement-db-create-xt26t\" (UID: \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\") " pod="openstack/placement-db-create-xt26t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.433704 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e9748c-0870-4dc2-bdc8-4843d15bd49f-operator-scripts\") pod \"placement-db-create-xt26t\" (UID: \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\") " pod="openstack/placement-db-create-xt26t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.433891 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be4959da-6d83-452d-a3e8-4796466c7d2f-operator-scripts\") pod \"placement-f956-account-create-w8cb6\" (UID: \"be4959da-6d83-452d-a3e8-4796466c7d2f\") " pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.433940 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbzgj\" (UniqueName: \"kubernetes.io/projected/be4959da-6d83-452d-a3e8-4796466c7d2f-kube-api-access-nbzgj\") pod \"placement-f956-account-create-w8cb6\" (UID: \"be4959da-6d83-452d-a3e8-4796466c7d2f\") " pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.435313 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be4959da-6d83-452d-a3e8-4796466c7d2f-operator-scripts\") pod \"placement-f956-account-create-w8cb6\" (UID: \"be4959da-6d83-452d-a3e8-4796466c7d2f\") " pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.457935 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbzgj\" (UniqueName: \"kubernetes.io/projected/be4959da-6d83-452d-a3e8-4796466c7d2f-kube-api-access-nbzgj\") pod \"placement-f956-account-create-w8cb6\" (UID: \"be4959da-6d83-452d-a3e8-4796466c7d2f\") " pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.460776 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c92q\" (UniqueName: \"kubernetes.io/projected/20e9748c-0870-4dc2-bdc8-4843d15bd49f-kube-api-access-5c92q\") pod \"placement-db-create-xt26t\" (UID: \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\") " pod="openstack/placement-db-create-xt26t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.554119 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xt26t" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.621525 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.736189 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mnnlk"] Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.738069 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.751266 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mnnlk"] Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.843271 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-operator-scripts\") pod \"glance-db-create-mnnlk\" (UID: \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\") " pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.843956 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqb4c\" (UniqueName: \"kubernetes.io/projected/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-kube-api-access-pqb4c\") pod \"glance-db-create-mnnlk\" (UID: \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\") " pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.873132 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ebc5-account-create-tx8n2"] Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.879354 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.888399 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.926642 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ebc5-account-create-tx8n2"] Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.950009 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-operator-scripts\") pod \"glance-db-create-mnnlk\" (UID: \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\") " pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.950550 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c81d71ad-5a52-4774-a30e-19a718fdd6f3-operator-scripts\") pod \"glance-ebc5-account-create-tx8n2\" (UID: \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\") " pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.950668 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5jlz\" (UniqueName: \"kubernetes.io/projected/c81d71ad-5a52-4774-a30e-19a718fdd6f3-kube-api-access-h5jlz\") pod \"glance-ebc5-account-create-tx8n2\" (UID: \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\") " pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.950778 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqb4c\" (UniqueName: 
\"kubernetes.io/projected/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-kube-api-access-pqb4c\") pod \"glance-db-create-mnnlk\" (UID: \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\") " pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.952999 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-operator-scripts\") pod \"glance-db-create-mnnlk\" (UID: \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\") " pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:21 crc kubenswrapper[4904]: I1121 13:54:21.977833 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqb4c\" (UniqueName: \"kubernetes.io/projected/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-kube-api-access-pqb4c\") pod \"glance-db-create-mnnlk\" (UID: \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\") " pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.022360 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tmc7t"] Nov 21 13:54:22 crc kubenswrapper[4904]: W1121 13:54:22.030185 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec7d1006_0238_446c_a9f8_da640c971ed2.slice/crio-82bceaf16238d882ac9d46986dad770516869932606f06305fda0e68f2f65b15 WatchSource:0}: Error finding container 82bceaf16238d882ac9d46986dad770516869932606f06305fda0e68f2f65b15: Status 404 returned error can't find the container with id 82bceaf16238d882ac9d46986dad770516869932606f06305fda0e68f2f65b15 Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.054570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c81d71ad-5a52-4774-a30e-19a718fdd6f3-operator-scripts\") pod \"glance-ebc5-account-create-tx8n2\" (UID: \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\") " pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.054633 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5jlz\" (UniqueName: \"kubernetes.io/projected/c81d71ad-5a52-4774-a30e-19a718fdd6f3-kube-api-access-h5jlz\") pod \"glance-ebc5-account-create-tx8n2\" (UID: \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\") " pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.055501 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c81d71ad-5a52-4774-a30e-19a718fdd6f3-operator-scripts\") pod \"glance-ebc5-account-create-tx8n2\" (UID: \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\") " pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.078271 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5jlz\" (UniqueName: \"kubernetes.io/projected/c81d71ad-5a52-4774-a30e-19a718fdd6f3-kube-api-access-h5jlz\") pod \"glance-ebc5-account-create-tx8n2\" (UID: \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\") " pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.112926 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.141465 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cb90-account-create-jfxjg"] Nov 21 13:54:22 crc kubenswrapper[4904]: W1121 13:54:22.191170 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b125736_8015_45e0_987a_c795d3aabcc8.slice/crio-4dabb0e580c5eee2fdf532a1123ed0047b8fc6c264ec96f4c7fcb4e33fdad5af WatchSource:0}: Error finding container 4dabb0e580c5eee2fdf532a1123ed0047b8fc6c264ec96f4c7fcb4e33fdad5af: Status 404 returned error can't find the container with id 4dabb0e580c5eee2fdf532a1123ed0047b8fc6c264ec96f4c7fcb4e33fdad5af Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.238885 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.269465 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tmc7t" event={"ID":"ec7d1006-0238-446c-a9f8-da640c971ed2","Type":"ContainerStarted","Data":"82bceaf16238d882ac9d46986dad770516869932606f06305fda0e68f2f65b15"} Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.274525 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cb90-account-create-jfxjg" event={"ID":"5b125736-8015-45e0-987a-c795d3aabcc8","Type":"ContainerStarted","Data":"4dabb0e580c5eee2fdf532a1123ed0047b8fc6c264ec96f4c7fcb4e33fdad5af"} Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.330350 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f956-account-create-w8cb6"] Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.341287 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xt26t"] Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.737929 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mnnlk"] Nov 21 13:54:22 crc kubenswrapper[4904]: I1121 13:54:22.948921 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ebc5-account-create-tx8n2"] Nov 21 13:54:22 crc kubenswrapper[4904]: W1121 13:54:22.993362 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc81d71ad_5a52_4774_a30e_19a718fdd6f3.slice/crio-d69a64994b9d6e739a90e5b1a06c8e638f38efb3ff7c3be1ddd6723e4de98c21 WatchSource:0}: Error finding container d69a64994b9d6e739a90e5b1a06c8e638f38efb3ff7c3be1ddd6723e4de98c21: Status 404 returned error can't find the container with id d69a64994b9d6e739a90e5b1a06c8e638f38efb3ff7c3be1ddd6723e4de98c21 Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.297297 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mnnlk" event={"ID":"e40cc279-7f5b-4dfd-9e1f-18ab4180b067","Type":"ContainerStarted","Data":"8db646191f1bc05d30dd17fe929c156c8c38bdb46013226d86c400536c3c706e"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.297717 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mnnlk" event={"ID":"e40cc279-7f5b-4dfd-9e1f-18ab4180b067","Type":"ContainerStarted","Data":"dd81256d15f94e348ccc96f799f5633394aae7aad966041f13dd40ca750a95aa"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.304520 4904 generic.go:334] "Generic (PLEG): container finished" 
podID="20e9748c-0870-4dc2-bdc8-4843d15bd49f" containerID="bb4e65fdd1b30a3485f4753a530c4927cdc2da17b8abc489cea3f8b962c2b70f" exitCode=0 Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.304862 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xt26t" event={"ID":"20e9748c-0870-4dc2-bdc8-4843d15bd49f","Type":"ContainerDied","Data":"bb4e65fdd1b30a3485f4753a530c4927cdc2da17b8abc489cea3f8b962c2b70f"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.304899 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xt26t" event={"ID":"20e9748c-0870-4dc2-bdc8-4843d15bd49f","Type":"ContainerStarted","Data":"433e2a93a3f3fa2f3ef6156e886d5e973fdc52e044f152fd97ca44bd171ca1b2"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.306863 4904 generic.go:334] "Generic (PLEG): container finished" podID="5b125736-8015-45e0-987a-c795d3aabcc8" containerID="494b623e68bbfc824b87c5f788cc36327839f708dca7ef150ea6088448d6a0e2" exitCode=0 Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.306916 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cb90-account-create-jfxjg" event={"ID":"5b125736-8015-45e0-987a-c795d3aabcc8","Type":"ContainerDied","Data":"494b623e68bbfc824b87c5f788cc36327839f708dca7ef150ea6088448d6a0e2"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.310116 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerStarted","Data":"d9c382fac45a772efc61624ddcc08150c3f4955473ea65c2c1c49dab6cb5e104"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.313816 4904 generic.go:334] "Generic (PLEG): container finished" podID="ec7d1006-0238-446c-a9f8-da640c971ed2" containerID="4c091526d7411c4077c21ef005ceec5fdf7eb3ff0e45e410c0bf2d6e315bee0b" exitCode=0 Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.313889 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tmc7t" event={"ID":"ec7d1006-0238-446c-a9f8-da640c971ed2","Type":"ContainerDied","Data":"4c091526d7411c4077c21ef005ceec5fdf7eb3ff0e45e410c0bf2d6e315bee0b"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.317135 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-mnnlk" podStartSLOduration=2.3171224280000002 podStartE2EDuration="2.317122428s" podCreationTimestamp="2025-11-21 13:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:54:23.312726631 +0000 UTC m=+1337.434259203" watchObservedRunningTime="2025-11-21 13:54:23.317122428 +0000 UTC m=+1337.438655000" Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.329208 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ebc5-account-create-tx8n2" event={"ID":"c81d71ad-5a52-4774-a30e-19a718fdd6f3","Type":"ContainerStarted","Data":"a49f58ece5968197c64b10a36fe0eb02bda642d6920ed2da01709784b866a965"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.329341 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ebc5-account-create-tx8n2" event={"ID":"c81d71ad-5a52-4774-a30e-19a718fdd6f3","Type":"ContainerStarted","Data":"d69a64994b9d6e739a90e5b1a06c8e638f38efb3ff7c3be1ddd6723e4de98c21"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.336205 4904 generic.go:334] "Generic (PLEG): container finished" 
podID="be4959da-6d83-452d-a3e8-4796466c7d2f" containerID="a63f4ac02624e4d791ceaaf14a9ac2c154e40870f9236382e4845c5d1ff74034" exitCode=0 Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.336273 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f956-account-create-w8cb6" event={"ID":"be4959da-6d83-452d-a3e8-4796466c7d2f","Type":"ContainerDied","Data":"a63f4ac02624e4d791ceaaf14a9ac2c154e40870f9236382e4845c5d1ff74034"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.336308 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f956-account-create-w8cb6" event={"ID":"be4959da-6d83-452d-a3e8-4796466c7d2f","Type":"ContainerStarted","Data":"c2eda21e3958e0cc0df662189600d9fe11dd9044a0d5fc154257cf6a1621aba0"} Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.379183 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ebc5-account-create-tx8n2" podStartSLOduration=2.379160345 podStartE2EDuration="2.379160345s" podCreationTimestamp="2025-11-21 13:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:54:23.371637342 +0000 UTC m=+1337.493169904" watchObservedRunningTime="2025-11-21 13:54:23.379160345 +0000 UTC m=+1337.500692917" Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.868716 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-54rdk"] Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.870750 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.882145 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-54rdk"] Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.982131 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-8b64-account-create-bcpmm"] Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.983783 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.986872 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Nov 21 13:54:23 crc kubenswrapper[4904]: I1121 13:54:23.994117 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8b64-account-create-bcpmm"] Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.000367 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3dd6ef-3fdf-4edb-8517-9b411146916f-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-54rdk\" (UID: \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.000416 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pqgw\" (UniqueName: \"kubernetes.io/projected/9d3dd6ef-3fdf-4edb-8517-9b411146916f-kube-api-access-5pqgw\") pod \"mysqld-exporter-openstack-cell1-db-create-54rdk\" (UID: \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.102570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3dd6ef-3fdf-4edb-8517-9b411146916f-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-54rdk\" (UID: \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.102626 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pqgw\" (UniqueName: \"kubernetes.io/projected/9d3dd6ef-3fdf-4edb-8517-9b411146916f-kube-api-access-5pqgw\") pod \"mysqld-exporter-openstack-cell1-db-create-54rdk\" (UID: \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.102701 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b28w\" (UniqueName: \"kubernetes.io/projected/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-kube-api-access-7b28w\") pod \"mysqld-exporter-8b64-account-create-bcpmm\" (UID: \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\") " pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.102741 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-operator-scripts\") pod \"mysqld-exporter-8b64-account-create-bcpmm\" (UID: \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\") " pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.103418 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3dd6ef-3fdf-4edb-8517-9b411146916f-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-54rdk\" (UID: \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:24 
crc kubenswrapper[4904]: I1121 13:54:24.146192 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pqgw\" (UniqueName: \"kubernetes.io/projected/9d3dd6ef-3fdf-4edb-8517-9b411146916f-kube-api-access-5pqgw\") pod \"mysqld-exporter-openstack-cell1-db-create-54rdk\" (UID: \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.204741 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b28w\" (UniqueName: \"kubernetes.io/projected/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-kube-api-access-7b28w\") pod \"mysqld-exporter-8b64-account-create-bcpmm\" (UID: \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\") " pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.204855 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-operator-scripts\") pod \"mysqld-exporter-8b64-account-create-bcpmm\" (UID: \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\") " pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.205999 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-operator-scripts\") pod \"mysqld-exporter-8b64-account-create-bcpmm\" (UID: \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\") " pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.240323 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.240432 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b28w\" (UniqueName: \"kubernetes.io/projected/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-kube-api-access-7b28w\") pod \"mysqld-exporter-8b64-account-create-bcpmm\" (UID: \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\") " pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.316563 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.354335 4904 generic.go:334] "Generic (PLEG): container finished" podID="c81d71ad-5a52-4774-a30e-19a718fdd6f3" containerID="a49f58ece5968197c64b10a36fe0eb02bda642d6920ed2da01709784b866a965" exitCode=0 Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.354474 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ebc5-account-create-tx8n2" event={"ID":"c81d71ad-5a52-4774-a30e-19a718fdd6f3","Type":"ContainerDied","Data":"a49f58ece5968197c64b10a36fe0eb02bda642d6920ed2da01709784b866a965"} Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.363081 4904 generic.go:334] "Generic (PLEG): container finished" podID="e40cc279-7f5b-4dfd-9e1f-18ab4180b067" containerID="8db646191f1bc05d30dd17fe929c156c8c38bdb46013226d86c400536c3c706e" exitCode=0 Nov 21 13:54:24 crc kubenswrapper[4904]: I1121 13:54:24.363557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mnnlk" event={"ID":"e40cc279-7f5b-4dfd-9e1f-18ab4180b067","Type":"ContainerDied","Data":"8db646191f1bc05d30dd17fe929c156c8c38bdb46013226d86c400536c3c706e"} Nov 21 13:54:25 crc kubenswrapper[4904]: I1121 13:54:25.100366 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-54rdk"] Nov 21 13:54:25 crc kubenswrapper[4904]: I1121 13:54:25.341258 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-8b64-account-create-bcpmm"] Nov 21 13:54:26 crc kubenswrapper[4904]: W1121 13:54:26.342932 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d3dd6ef_3fdf_4edb_8517_9b411146916f.slice/crio-a5166f5ba8f8db913374293b37a239c837904618caa214cf1943a6a11e016e5c WatchSource:0}: Error finding container a5166f5ba8f8db913374293b37a239c837904618caa214cf1943a6a11e016e5c: Status 404 returned error can't find the container with id a5166f5ba8f8db913374293b37a239c837904618caa214cf1943a6a11e016e5c Nov 21 13:54:26 crc kubenswrapper[4904]: W1121 13:54:26.362638 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d06a296_4f71_4ce1_93e9_d79ca7e1ba9a.slice/crio-a12c6abefd25f6ff34c32959c902e7c9804a29c483740e2cd38fe28d7f9de3cc WatchSource:0}: Error finding container a12c6abefd25f6ff34c32959c902e7c9804a29c483740e2cd38fe28d7f9de3cc: Status 404 returned error can't find the container with id a12c6abefd25f6ff34c32959c902e7c9804a29c483740e2cd38fe28d7f9de3cc Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.390729 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ebc5-account-create-tx8n2" event={"ID":"c81d71ad-5a52-4774-a30e-19a718fdd6f3","Type":"ContainerDied","Data":"d69a64994b9d6e739a90e5b1a06c8e638f38efb3ff7c3be1ddd6723e4de98c21"} Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.390783 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69a64994b9d6e739a90e5b1a06c8e638f38efb3ff7c3be1ddd6723e4de98c21" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.402694 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f956-account-create-w8cb6" event={"ID":"be4959da-6d83-452d-a3e8-4796466c7d2f","Type":"ContainerDied","Data":"c2eda21e3958e0cc0df662189600d9fe11dd9044a0d5fc154257cf6a1621aba0"} Nov 21 13:54:26 crc kubenswrapper[4904]: 
I1121 13:54:26.402731 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2eda21e3958e0cc0df662189600d9fe11dd9044a0d5fc154257cf6a1621aba0" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.407930 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mnnlk" event={"ID":"e40cc279-7f5b-4dfd-9e1f-18ab4180b067","Type":"ContainerDied","Data":"dd81256d15f94e348ccc96f799f5633394aae7aad966041f13dd40ca750a95aa"} Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.407958 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd81256d15f94e348ccc96f799f5633394aae7aad966041f13dd40ca750a95aa" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.409946 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xt26t" event={"ID":"20e9748c-0870-4dc2-bdc8-4843d15bd49f","Type":"ContainerDied","Data":"433e2a93a3f3fa2f3ef6156e886d5e973fdc52e044f152fd97ca44bd171ca1b2"} Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.409966 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="433e2a93a3f3fa2f3ef6156e886d5e973fdc52e044f152fd97ca44bd171ca1b2" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.412575 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cb90-account-create-jfxjg" event={"ID":"5b125736-8015-45e0-987a-c795d3aabcc8","Type":"ContainerDied","Data":"4dabb0e580c5eee2fdf532a1123ed0047b8fc6c264ec96f4c7fcb4e33fdad5af"} Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.412593 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dabb0e580c5eee2fdf532a1123ed0047b8fc6c264ec96f4c7fcb4e33fdad5af" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.415607 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" event={"ID":"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a","Type":"ContainerStarted","Data":"a12c6abefd25f6ff34c32959c902e7c9804a29c483740e2cd38fe28d7f9de3cc"} Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.417851 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" event={"ID":"9d3dd6ef-3fdf-4edb-8517-9b411146916f","Type":"ContainerStarted","Data":"a5166f5ba8f8db913374293b37a239c837904618caa214cf1943a6a11e016e5c"} Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.430018 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tmc7t" event={"ID":"ec7d1006-0238-446c-a9f8-da640c971ed2","Type":"ContainerDied","Data":"82bceaf16238d882ac9d46986dad770516869932606f06305fda0e68f2f65b15"} Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.430077 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82bceaf16238d882ac9d46986dad770516869932606f06305fda0e68f2f65b15" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.465320 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xt26t" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.470966 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.580402 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e9748c-0870-4dc2-bdc8-4843d15bd49f-operator-scripts\") pod \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\" (UID: \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.580924 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smdwb\" (UniqueName: \"kubernetes.io/projected/ec7d1006-0238-446c-a9f8-da640c971ed2-kube-api-access-smdwb\") pod \"ec7d1006-0238-446c-a9f8-da640c971ed2\" (UID: \"ec7d1006-0238-446c-a9f8-da640c971ed2\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.581097 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c92q\" (UniqueName: \"kubernetes.io/projected/20e9748c-0870-4dc2-bdc8-4843d15bd49f-kube-api-access-5c92q\") pod \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\" (UID: \"20e9748c-0870-4dc2-bdc8-4843d15bd49f\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.581318 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7d1006-0238-446c-a9f8-da640c971ed2-operator-scripts\") pod \"ec7d1006-0238-446c-a9f8-da640c971ed2\" (UID: \"ec7d1006-0238-446c-a9f8-da640c971ed2\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.581568 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20e9748c-0870-4dc2-bdc8-4843d15bd49f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20e9748c-0870-4dc2-bdc8-4843d15bd49f" (UID: "20e9748c-0870-4dc2-bdc8-4843d15bd49f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.582491 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20e9748c-0870-4dc2-bdc8-4843d15bd49f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.582555 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec7d1006-0238-446c-a9f8-da640c971ed2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec7d1006-0238-446c-a9f8-da640c971ed2" (UID: "ec7d1006-0238-446c-a9f8-da640c971ed2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.589528 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e9748c-0870-4dc2-bdc8-4843d15bd49f-kube-api-access-5c92q" (OuterVolumeSpecName: "kube-api-access-5c92q") pod "20e9748c-0870-4dc2-bdc8-4843d15bd49f" (UID: "20e9748c-0870-4dc2-bdc8-4843d15bd49f"). InnerVolumeSpecName "kube-api-access-5c92q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.590359 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7d1006-0238-446c-a9f8-da640c971ed2-kube-api-access-smdwb" (OuterVolumeSpecName: "kube-api-access-smdwb") pod "ec7d1006-0238-446c-a9f8-da640c971ed2" (UID: "ec7d1006-0238-446c-a9f8-da640c971ed2"). 
InnerVolumeSpecName "kube-api-access-smdwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.655372 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.684344 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smdwb\" (UniqueName: \"kubernetes.io/projected/ec7d1006-0238-446c-a9f8-da640c971ed2-kube-api-access-smdwb\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.684382 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c92q\" (UniqueName: \"kubernetes.io/projected/20e9748c-0870-4dc2-bdc8-4843d15bd49f-kube-api-access-5c92q\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.684393 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec7d1006-0238-446c-a9f8-da640c971ed2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.693245 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.728191 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.738927 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.785320 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqb4c\" (UniqueName: \"kubernetes.io/projected/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-kube-api-access-pqb4c\") pod \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\" (UID: \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.785840 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbzgj\" (UniqueName: \"kubernetes.io/projected/be4959da-6d83-452d-a3e8-4796466c7d2f-kube-api-access-nbzgj\") pod \"be4959da-6d83-452d-a3e8-4796466c7d2f\" (UID: \"be4959da-6d83-452d-a3e8-4796466c7d2f\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.785933 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c81d71ad-5a52-4774-a30e-19a718fdd6f3-operator-scripts\") pod \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\" (UID: \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.785959 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5jlz\" (UniqueName: \"kubernetes.io/projected/c81d71ad-5a52-4774-a30e-19a718fdd6f3-kube-api-access-h5jlz\") pod \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\" (UID: \"c81d71ad-5a52-4774-a30e-19a718fdd6f3\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.786004 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be4959da-6d83-452d-a3e8-4796466c7d2f-operator-scripts\") pod \"be4959da-6d83-452d-a3e8-4796466c7d2f\" (UID: \"be4959da-6d83-452d-a3e8-4796466c7d2f\") " Nov 21 13:54:26 crc 
kubenswrapper[4904]: I1121 13:54:26.786183 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-operator-scripts\") pod \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\" (UID: \"e40cc279-7f5b-4dfd-9e1f-18ab4180b067\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.786832 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c81d71ad-5a52-4774-a30e-19a718fdd6f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c81d71ad-5a52-4774-a30e-19a718fdd6f3" (UID: "c81d71ad-5a52-4774-a30e-19a718fdd6f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.787638 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be4959da-6d83-452d-a3e8-4796466c7d2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be4959da-6d83-452d-a3e8-4796466c7d2f" (UID: "be4959da-6d83-452d-a3e8-4796466c7d2f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.787932 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e40cc279-7f5b-4dfd-9e1f-18ab4180b067" (UID: "e40cc279-7f5b-4dfd-9e1f-18ab4180b067"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.793951 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-kube-api-access-pqb4c" (OuterVolumeSpecName: "kube-api-access-pqb4c") pod "e40cc279-7f5b-4dfd-9e1f-18ab4180b067" (UID: "e40cc279-7f5b-4dfd-9e1f-18ab4180b067"). InnerVolumeSpecName "kube-api-access-pqb4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.794124 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4959da-6d83-452d-a3e8-4796466c7d2f-kube-api-access-nbzgj" (OuterVolumeSpecName: "kube-api-access-nbzgj") pod "be4959da-6d83-452d-a3e8-4796466c7d2f" (UID: "be4959da-6d83-452d-a3e8-4796466c7d2f"). InnerVolumeSpecName "kube-api-access-nbzgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.796242 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81d71ad-5a52-4774-a30e-19a718fdd6f3-kube-api-access-h5jlz" (OuterVolumeSpecName: "kube-api-access-h5jlz") pod "c81d71ad-5a52-4774-a30e-19a718fdd6f3" (UID: "c81d71ad-5a52-4774-a30e-19a718fdd6f3"). InnerVolumeSpecName "kube-api-access-h5jlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888038 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125736-8015-45e0-987a-c795d3aabcc8-operator-scripts\") pod \"5b125736-8015-45e0-987a-c795d3aabcc8\" (UID: \"5b125736-8015-45e0-987a-c795d3aabcc8\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888127 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92j7\" (UniqueName: \"kubernetes.io/projected/5b125736-8015-45e0-987a-c795d3aabcc8-kube-api-access-h92j7\") pod \"5b125736-8015-45e0-987a-c795d3aabcc8\" (UID: \"5b125736-8015-45e0-987a-c795d3aabcc8\") " Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888618 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c81d71ad-5a52-4774-a30e-19a718fdd6f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888630 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5jlz\" (UniqueName: \"kubernetes.io/projected/c81d71ad-5a52-4774-a30e-19a718fdd6f3-kube-api-access-h5jlz\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888642 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be4959da-6d83-452d-a3e8-4796466c7d2f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888668 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888677 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqb4c\" (UniqueName: \"kubernetes.io/projected/e40cc279-7f5b-4dfd-9e1f-18ab4180b067-kube-api-access-pqb4c\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888686 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbzgj\" (UniqueName: \"kubernetes.io/projected/be4959da-6d83-452d-a3e8-4796466c7d2f-kube-api-access-nbzgj\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.888956 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b125736-8015-45e0-987a-c795d3aabcc8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b125736-8015-45e0-987a-c795d3aabcc8" (UID: "5b125736-8015-45e0-987a-c795d3aabcc8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.891723 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b125736-8015-45e0-987a-c795d3aabcc8-kube-api-access-h92j7" (OuterVolumeSpecName: "kube-api-access-h92j7") pod "5b125736-8015-45e0-987a-c795d3aabcc8" (UID: "5b125736-8015-45e0-987a-c795d3aabcc8"). InnerVolumeSpecName "kube-api-access-h92j7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.991340 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b125736-8015-45e0-987a-c795d3aabcc8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:26 crc kubenswrapper[4904]: I1121 13:54:26.991379 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92j7\" (UniqueName: \"kubernetes.io/projected/5b125736-8015-45e0-987a-c795d3aabcc8-kube-api-access-h92j7\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:27 crc kubenswrapper[4904]: E1121 13:54:27.016479 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf72ba976_6eb5_4886_81fc_3f7e4563039d.slice/crio-conmon-3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.446641 4904 generic.go:334] "Generic (PLEG): container finished" podID="9d3dd6ef-3fdf-4edb-8517-9b411146916f" containerID="9e3d0ec26eef04c65c4a2cc143bdf82f908cacf19dcf47bcfbc78a5ab77dbc28" exitCode=0 Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.447219 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" event={"ID":"9d3dd6ef-3fdf-4edb-8517-9b411146916f","Type":"ContainerDied","Data":"9e3d0ec26eef04c65c4a2cc143bdf82f908cacf19dcf47bcfbc78a5ab77dbc28"} Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.461560 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerStarted","Data":"c3d8b4ebdd13ec15fc3ab7fe700536135f2d7af7604ab5b17db127e469d7b520"} Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.468790 4904 generic.go:334] "Generic (PLEG): container finished" podID="c847f685-6c92-40df-9608-675e8f21c058" containerID="98775b9e3f82e994858a88a1eba865e30467f54a8e86b083191719be6d44f564" exitCode=0 Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.469132 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x9zjr" event={"ID":"c847f685-6c92-40df-9608-675e8f21c058","Type":"ContainerDied","Data":"98775b9e3f82e994858a88a1eba865e30467f54a8e86b083191719be6d44f564"} Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.473280 4904 generic.go:334] "Generic (PLEG): container finished" podID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerID="3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4" exitCode=0 Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.473350 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f72ba976-6eb5-4886-81fc-3f7e4563039d","Type":"ContainerDied","Data":"3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4"} Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.480732 4904 generic.go:334] "Generic (PLEG): container finished" podID="3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a" containerID="83a0dba8f011de55212a24290fb249b9eb660b64ebe5d4363f6d6aefc2754fb1" exitCode=0 Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.480862 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-mnnlk" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.482046 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xt26t" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.482855 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" event={"ID":"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a","Type":"ContainerDied","Data":"83a0dba8f011de55212a24290fb249b9eb660b64ebe5d4363f6d6aefc2754fb1"} Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.483195 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ebc5-account-create-tx8n2" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.483241 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cb90-account-create-jfxjg" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.484775 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tmc7t" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.484876 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f956-account-create-w8cb6" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.653933 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.888639124 podStartE2EDuration="1m14.653909386s" podCreationTimestamp="2025-11-21 13:53:13 +0000 UTC" firstStartedPulling="2025-11-21 13:53:38.696286959 +0000 UTC m=+1292.817819511" lastFinishedPulling="2025-11-21 13:54:26.461557201 +0000 UTC m=+1340.583089773" observedRunningTime="2025-11-21 13:54:27.560542832 +0000 UTC m=+1341.682075394" watchObservedRunningTime="2025-11-21 13:54:27.653909386 +0000 UTC m=+1341.775441938" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.827904 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jt79x" Nov 21 13:54:27 crc kubenswrapper[4904]: I1121 13:54:27.856856 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gh8lv" podUID="48ec880a-b9f8-4b7a-9d69-98b730a07a02" containerName="ovn-controller" probeResult="failure" output=< Nov 21 13:54:27 crc kubenswrapper[4904]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 21 13:54:27 crc kubenswrapper[4904]: > Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.113638 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.113752 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.113818 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 
21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.114874 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcc2d7cbaa7ef87e0c42834de16075a7ebf9ca0b1a68c156ba86b82f49b3f653"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.114949 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://dcc2d7cbaa7ef87e0c42834de16075a7ebf9ca0b1a68c156ba86b82f49b3f653" gracePeriod=600 Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.496313 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="dcc2d7cbaa7ef87e0c42834de16075a7ebf9ca0b1a68c156ba86b82f49b3f653" exitCode=0 Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.496441 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"dcc2d7cbaa7ef87e0c42834de16075a7ebf9ca0b1a68c156ba86b82f49b3f653"} Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.497209 4904 scope.go:117] "RemoveContainer" containerID="d7fd2b100d2e7ad73c083b4c79b52506999f9dce6592051de1411c6354bd5da0" Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.503027 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f72ba976-6eb5-4886-81fc-3f7e4563039d","Type":"ContainerStarted","Data":"58ad7f87df0c5b3de290c7d50f4cc86599032e2538a3c0da8ac6d5edb4326ef4"} Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.548119 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371954.306677 podStartE2EDuration="1m22.548099047s" podCreationTimestamp="2025-11-21 13:53:06 +0000 UTC" firstStartedPulling="2025-11-21 13:53:09.47649072 +0000 UTC m=+1263.598023272" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:54:28.539077537 +0000 UTC m=+1342.660610099" watchObservedRunningTime="2025-11-21 13:54:28.548099047 +0000 UTC m=+1342.669631599" Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.661561 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.739042 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:54:28 crc kubenswrapper[4904]: I1121 13:54:28.749201 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.016990 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.071482 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c847f685-6c92-40df-9608-675e8f21c058-etc-swift\") pod \"c847f685-6c92-40df-9608-675e8f21c058\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.071544 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-combined-ca-bundle\") pod \"c847f685-6c92-40df-9608-675e8f21c058\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.071598 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-dispersionconf\") pod \"c847f685-6c92-40df-9608-675e8f21c058\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.071697 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-scripts\") pod \"c847f685-6c92-40df-9608-675e8f21c058\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.071819 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-swiftconf\") pod \"c847f685-6c92-40df-9608-675e8f21c058\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.071850 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x449j\" (UniqueName: \"kubernetes.io/projected/c847f685-6c92-40df-9608-675e8f21c058-kube-api-access-x449j\") pod \"c847f685-6c92-40df-9608-675e8f21c058\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.071953 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-ring-data-devices\") pod \"c847f685-6c92-40df-9608-675e8f21c058\" (UID: \"c847f685-6c92-40df-9608-675e8f21c058\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.073079 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c847f685-6c92-40df-9608-675e8f21c058" (UID: "c847f685-6c92-40df-9608-675e8f21c058"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.073792 4904 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.074150 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c847f685-6c92-40df-9608-675e8f21c058-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c847f685-6c92-40df-9608-675e8f21c058" (UID: "c847f685-6c92-40df-9608-675e8f21c058"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.090597 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c847f685-6c92-40df-9608-675e8f21c058-kube-api-access-x449j" (OuterVolumeSpecName: "kube-api-access-x449j") pod "c847f685-6c92-40df-9608-675e8f21c058" (UID: "c847f685-6c92-40df-9608-675e8f21c058"). InnerVolumeSpecName "kube-api-access-x449j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.108170 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c847f685-6c92-40df-9608-675e8f21c058" (UID: "c847f685-6c92-40df-9608-675e8f21c058"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.125906 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c847f685-6c92-40df-9608-675e8f21c058" (UID: "c847f685-6c92-40df-9608-675e8f21c058"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.134209 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c847f685-6c92-40df-9608-675e8f21c058" (UID: "c847f685-6c92-40df-9608-675e8f21c058"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.147965 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-scripts" (OuterVolumeSpecName: "scripts") pod "c847f685-6c92-40df-9608-675e8f21c058" (UID: "c847f685-6c92-40df-9608-675e8f21c058"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.151055 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.161840 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.176486 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b28w\" (UniqueName: \"kubernetes.io/projected/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-kube-api-access-7b28w\") pod \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\" (UID: \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.176604 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-operator-scripts\") pod \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\" (UID: \"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.177112 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c847f685-6c92-40df-9608-675e8f21c058-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.177129 4904 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.177139 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x449j\" (UniqueName: \"kubernetes.io/projected/c847f685-6c92-40df-9608-675e8f21c058-kube-api-access-x449j\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.177151 4904 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c847f685-6c92-40df-9608-675e8f21c058-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.177159 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.177168 4904 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c847f685-6c92-40df-9608-675e8f21c058-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.177320 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a" (UID: "3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.186593 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-kube-api-access-7b28w" (OuterVolumeSpecName: "kube-api-access-7b28w") pod "3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a" (UID: "3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a"). InnerVolumeSpecName "kube-api-access-7b28w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.280024 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pqgw\" (UniqueName: \"kubernetes.io/projected/9d3dd6ef-3fdf-4edb-8517-9b411146916f-kube-api-access-5pqgw\") pod \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\" (UID: \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.280319 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3dd6ef-3fdf-4edb-8517-9b411146916f-operator-scripts\") pod \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\" (UID: \"9d3dd6ef-3fdf-4edb-8517-9b411146916f\") " Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.282701 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b28w\" (UniqueName: \"kubernetes.io/projected/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-kube-api-access-7b28w\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.282744 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.282717 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3dd6ef-3fdf-4edb-8517-9b411146916f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d3dd6ef-3fdf-4edb-8517-9b411146916f" (UID: "9d3dd6ef-3fdf-4edb-8517-9b411146916f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.284763 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3dd6ef-3fdf-4edb-8517-9b411146916f-kube-api-access-5pqgw" (OuterVolumeSpecName: "kube-api-access-5pqgw") pod "9d3dd6ef-3fdf-4edb-8517-9b411146916f" (UID: "9d3dd6ef-3fdf-4edb-8517-9b411146916f"). InnerVolumeSpecName "kube-api-access-5pqgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.384604 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pqgw\" (UniqueName: \"kubernetes.io/projected/9d3dd6ef-3fdf-4edb-8517-9b411146916f-kube-api-access-5pqgw\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.384672 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3dd6ef-3fdf-4edb-8517-9b411146916f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.519830 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991"} Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.523680 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" event={"ID":"3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a","Type":"ContainerDied","Data":"a12c6abefd25f6ff34c32959c902e7c9804a29c483740e2cd38fe28d7f9de3cc"} Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.523734 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12c6abefd25f6ff34c32959c902e7c9804a29c483740e2cd38fe28d7f9de3cc" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.523768 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-8b64-account-create-bcpmm" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.526147 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" event={"ID":"9d3dd6ef-3fdf-4edb-8517-9b411146916f","Type":"ContainerDied","Data":"a5166f5ba8f8db913374293b37a239c837904618caa214cf1943a6a11e016e5c"} Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.526244 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5166f5ba8f8db913374293b37a239c837904618caa214cf1943a6a11e016e5c" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.526400 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-54rdk" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.534391 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-x9zjr" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.534465 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-x9zjr" event={"ID":"c847f685-6c92-40df-9608-675e8f21c058","Type":"ContainerDied","Data":"bc4d50b1dde11319cbf5c2dbcf2cbecf08b7d784ba7facb96c5db42491ec47a4"} Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.534509 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4d50b1dde11319cbf5c2dbcf2cbecf08b7d784ba7facb96c5db42491ec47a4" Nov 21 13:54:29 crc kubenswrapper[4904]: I1121 13:54:29.855260 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 21 13:54:30 crc kubenswrapper[4904]: I1121 13:54:30.271416 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:30 crc kubenswrapper[4904]: I1121 13:54:30.271483 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:30 crc kubenswrapper[4904]: I1121 13:54:30.274563 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:30 crc kubenswrapper[4904]: I1121 13:54:30.544482 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.115427 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-v45q6"] Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116535 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b125736-8015-45e0-987a-c795d3aabcc8" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116550 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b125736-8015-45e0-987a-c795d3aabcc8" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116582 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d1006-0238-446c-a9f8-da640c971ed2" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116588 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7d1006-0238-446c-a9f8-da640c971ed2" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116600 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116606 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116620 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c847f685-6c92-40df-9608-675e8f21c058" containerName="swift-ring-rebalance" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116626 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c847f685-6c92-40df-9608-675e8f21c058" containerName="swift-ring-rebalance" Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116636 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81d71ad-5a52-4774-a30e-19a718fdd6f3" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116641 
4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81d71ad-5a52-4774-a30e-19a718fdd6f3" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116668 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e9748c-0870-4dc2-bdc8-4843d15bd49f" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116675 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e9748c-0870-4dc2-bdc8-4843d15bd49f" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116686 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4959da-6d83-452d-a3e8-4796466c7d2f" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116692 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4959da-6d83-452d-a3e8-4796466c7d2f" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116702 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40cc279-7f5b-4dfd-9e1f-18ab4180b067" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116708 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40cc279-7f5b-4dfd-9e1f-18ab4180b067" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: E1121 13:54:32.116717 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3dd6ef-3fdf-4edb-8517-9b411146916f" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116723 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3dd6ef-3fdf-4edb-8517-9b411146916f" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116930 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7d1006-0238-446c-a9f8-da640c971ed2" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116944 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e9748c-0870-4dc2-bdc8-4843d15bd49f" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116961 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3dd6ef-3fdf-4edb-8517-9b411146916f" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116971 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116983 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40cc279-7f5b-4dfd-9e1f-18ab4180b067" containerName="mariadb-database-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.116993 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4959da-6d83-452d-a3e8-4796466c7d2f" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.117004 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b125736-8015-45e0-987a-c795d3aabcc8" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.117013 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81d71ad-5a52-4774-a30e-19a718fdd6f3" containerName="mariadb-account-create" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.117023 4904 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c847f685-6c92-40df-9608-675e8f21c058" containerName="swift-ring-rebalance" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.117803 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.120443 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qncwq" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.120468 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.131896 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v45q6"] Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.293413 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-combined-ca-bundle\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.293607 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-config-data\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.293742 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-db-sync-config-data\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.293816 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxgp\" (UniqueName: \"kubernetes.io/projected/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-kube-api-access-vrxgp\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.395465 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrxgp\" (UniqueName: \"kubernetes.io/projected/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-kube-api-access-vrxgp\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.395978 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-combined-ca-bundle\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.396242 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-config-data\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc 
kubenswrapper[4904]: I1121 13:54:32.396424 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-db-sync-config-data\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.406127 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-db-sync-config-data\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.406788 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-combined-ca-bundle\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.407559 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-config-data\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.418525 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrxgp\" (UniqueName: \"kubernetes.io/projected/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-kube-api-access-vrxgp\") pod \"glance-db-sync-v45q6\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.438428 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v45q6" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.672485 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gh8lv" podUID="48ec880a-b9f8-4b7a-9d69-98b730a07a02" containerName="ovn-controller" probeResult="failure" output=< Nov 21 13:54:32 crc kubenswrapper[4904]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 21 13:54:32 crc kubenswrapper[4904]: > Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.761254 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-jt79x" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.993855 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gh8lv-config-nbf9f"] Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.996693 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:32 crc kubenswrapper[4904]: I1121 13:54:32.999086 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.012492 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gh8lv-config-nbf9f"] Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.119846 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.119912 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run-ovn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.120003 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-scripts\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.120131 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-additional-scripts\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.120568 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-log-ovn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.120603 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvphn\" (UniqueName: \"kubernetes.io/projected/f393074f-d269-4122-a831-c39d32c455bb-kube-api-access-rvphn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.174349 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v45q6"] Nov 21 13:54:33 crc kubenswrapper[4904]: W1121 13:54:33.178760 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ccebc8d_97ab_4df0_be88_3e6147a45b7a.slice/crio-d3275abe3191e07ac8419594380227edd605727246380ebc401de628db977dd1 WatchSource:0}: Error finding container d3275abe3191e07ac8419594380227edd605727246380ebc401de628db977dd1: Status 404 returned error can't find the container with id 
d3275abe3191e07ac8419594380227edd605727246380ebc401de628db977dd1 Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223205 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223279 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run-ovn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223311 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-scripts\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223339 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-additional-scripts\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223436 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-log-ovn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223497 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvphn\" (UniqueName: \"kubernetes.io/projected/f393074f-d269-4122-a831-c39d32c455bb-kube-api-access-rvphn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223766 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223774 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-log-ovn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.223847 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run-ovn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " 
pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.224808 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-additional-scripts\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.225499 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-scripts\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.251133 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvphn\" (UniqueName: \"kubernetes.io/projected/f393074f-d269-4122-a831-c39d32c455bb-kube-api-access-rvphn\") pod \"ovn-controller-gh8lv-config-nbf9f\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.321741 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.600046 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v45q6" event={"ID":"7ccebc8d-97ab-4df0-be88-3e6147a45b7a","Type":"ContainerStarted","Data":"d3275abe3191e07ac8419594380227edd605727246380ebc401de628db977dd1"} Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.680550 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.680923 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="prometheus" containerID="cri-o://309e80dda1be2d3f0158d58a33ad56f06d8569a9860a7a84c80ac9093ab004d9" gracePeriod=600 Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.680987 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="thanos-sidecar" containerID="cri-o://c3d8b4ebdd13ec15fc3ab7fe700536135f2d7af7604ab5b17db127e469d7b520" gracePeriod=600 Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.681026 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="config-reloader" containerID="cri-o://d9c382fac45a772efc61624ddcc08150c3f4955473ea65c2c1c49dab6cb5e104" gracePeriod=600 Nov 21 13:54:33 crc kubenswrapper[4904]: I1121 13:54:33.962315 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gh8lv-config-nbf9f"] Nov 21 13:54:33 crc kubenswrapper[4904]: W1121 13:54:33.974935 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf393074f_d269_4122_a831_c39d32c455bb.slice/crio-b711bb894aa0e837ee37a2d34e7a4383a087f030a07c1479a83472f6ce4fbb69 WatchSource:0}: Error finding container 
b711bb894aa0e837ee37a2d34e7a4383a087f030a07c1479a83472f6ce4fbb69: Status 404 returned error can't find the container with id b711bb894aa0e837ee37a2d34e7a4383a087f030a07c1479a83472f6ce4fbb69 Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.176315 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.184898 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.187311 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.192272 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.360455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp69d\" (UniqueName: \"kubernetes.io/projected/1bebdadb-691e-41d5-8bff-00fe08591c75-kube-api-access-qp69d\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.360613 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.360711 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-config-data\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.462626 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp69d\" (UniqueName: \"kubernetes.io/projected/1bebdadb-691e-41d5-8bff-00fe08591c75-kube-api-access-qp69d\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.462801 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.462868 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-config-data\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.471879 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-config-data\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.474443 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.485876 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp69d\" (UniqueName: \"kubernetes.io/projected/1bebdadb-691e-41d5-8bff-00fe08591c75-kube-api-access-qp69d\") pod \"mysqld-exporter-0\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.562905 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.624273 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv-config-nbf9f" event={"ID":"f393074f-d269-4122-a831-c39d32c455bb","Type":"ContainerStarted","Data":"e59eeb757a8701354e7f2f7a476236a861b9748fe72833b87c930aa3d3dae7df"} Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.624339 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv-config-nbf9f" event={"ID":"f393074f-d269-4122-a831-c39d32c455bb","Type":"ContainerStarted","Data":"b711bb894aa0e837ee37a2d34e7a4383a087f030a07c1479a83472f6ce4fbb69"} Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.639529 4904 generic.go:334] "Generic (PLEG): container finished" podID="f0496532-016f-4090-9d82-1b500e179cd1" containerID="c3d8b4ebdd13ec15fc3ab7fe700536135f2d7af7604ab5b17db127e469d7b520" exitCode=0 Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.639574 4904 generic.go:334] "Generic (PLEG): container finished" podID="f0496532-016f-4090-9d82-1b500e179cd1" containerID="d9c382fac45a772efc61624ddcc08150c3f4955473ea65c2c1c49dab6cb5e104" exitCode=0 Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.639587 4904 generic.go:334] "Generic (PLEG): container finished" podID="f0496532-016f-4090-9d82-1b500e179cd1" containerID="309e80dda1be2d3f0158d58a33ad56f06d8569a9860a7a84c80ac9093ab004d9" exitCode=0 Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.639634 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerDied","Data":"c3d8b4ebdd13ec15fc3ab7fe700536135f2d7af7604ab5b17db127e469d7b520"} Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.639709 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerDied","Data":"d9c382fac45a772efc61624ddcc08150c3f4955473ea65c2c1c49dab6cb5e104"} Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.639730 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerDied","Data":"309e80dda1be2d3f0158d58a33ad56f06d8569a9860a7a84c80ac9093ab004d9"} Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.673476 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gh8lv-config-nbf9f" podStartSLOduration=2.6734497619999997 podStartE2EDuration="2.673449762s" podCreationTimestamp="2025-11-21 13:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:54:34.669684599 +0000 UTC m=+1348.791217151" watchObservedRunningTime="2025-11-21 13:54:34.673449762 +0000 UTC m=+1348.794982314" Nov 21 13:54:34 crc kubenswrapper[4904]: I1121 13:54:34.884763 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.075713 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-web-config\") pod \"f0496532-016f-4090-9d82-1b500e179cd1\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.075863 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-thanos-prometheus-http-client-file\") pod \"f0496532-016f-4090-9d82-1b500e179cd1\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.075943 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f0496532-016f-4090-9d82-1b500e179cd1-prometheus-metric-storage-rulefiles-0\") pod \"f0496532-016f-4090-9d82-1b500e179cd1\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.075985 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-config\") pod \"f0496532-016f-4090-9d82-1b500e179cd1\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.076060 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-tls-assets\") pod \"f0496532-016f-4090-9d82-1b500e179cd1\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.076087 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78hns\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-kube-api-access-78hns\") pod \"f0496532-016f-4090-9d82-1b500e179cd1\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.076147 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f0496532-016f-4090-9d82-1b500e179cd1-config-out\") pod \"f0496532-016f-4090-9d82-1b500e179cd1\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.076385 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"f0496532-016f-4090-9d82-1b500e179cd1\" (UID: \"f0496532-016f-4090-9d82-1b500e179cd1\") " Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.076844 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f0496532-016f-4090-9d82-1b500e179cd1-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "f0496532-016f-4090-9d82-1b500e179cd1" (UID: "f0496532-016f-4090-9d82-1b500e179cd1"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.077747 4904 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f0496532-016f-4090-9d82-1b500e179cd1-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.082947 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-config" (OuterVolumeSpecName: "config") pod "f0496532-016f-4090-9d82-1b500e179cd1" (UID: "f0496532-016f-4090-9d82-1b500e179cd1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.084419 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0496532-016f-4090-9d82-1b500e179cd1-config-out" (OuterVolumeSpecName: "config-out") pod "f0496532-016f-4090-9d82-1b500e179cd1" (UID: "f0496532-016f-4090-9d82-1b500e179cd1"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.085105 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "f0496532-016f-4090-9d82-1b500e179cd1" (UID: "f0496532-016f-4090-9d82-1b500e179cd1"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.087030 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-kube-api-access-78hns" (OuterVolumeSpecName: "kube-api-access-78hns") pod "f0496532-016f-4090-9d82-1b500e179cd1" (UID: "f0496532-016f-4090-9d82-1b500e179cd1"). InnerVolumeSpecName "kube-api-access-78hns". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.087952 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "f0496532-016f-4090-9d82-1b500e179cd1" (UID: "f0496532-016f-4090-9d82-1b500e179cd1"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.102418 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "f0496532-016f-4090-9d82-1b500e179cd1" (UID: "f0496532-016f-4090-9d82-1b500e179cd1"). InnerVolumeSpecName "pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.122131 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-web-config" (OuterVolumeSpecName: "web-config") pod "f0496532-016f-4090-9d82-1b500e179cd1" (UID: "f0496532-016f-4090-9d82-1b500e179cd1"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.178267 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.179806 4904 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-web-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.179852 4904 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.179872 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0496532-016f-4090-9d82-1b500e179cd1-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.179887 4904 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.179901 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78hns\" (UniqueName: \"kubernetes.io/projected/f0496532-016f-4090-9d82-1b500e179cd1-kube-api-access-78hns\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.179916 4904 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f0496532-016f-4090-9d82-1b500e179cd1-config-out\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.179951 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") on node \"crc\" " Nov 21 13:54:35 crc kubenswrapper[4904]: W1121 13:54:35.198282 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bebdadb_691e_41d5_8bff_00fe08591c75.slice/crio-35492a826c3ab44fab48396522b6d29c4bee852785a9521cd21ca1dea2fd3fa3 WatchSource:0}: Error finding container 35492a826c3ab44fab48396522b6d29c4bee852785a9521cd21ca1dea2fd3fa3: Status 404 returned error can't find the container with id 35492a826c3ab44fab48396522b6d29c4bee852785a9521cd21ca1dea2fd3fa3 Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.214149 4904 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.214384 4904 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558") on node "crc" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.282237 4904 reconciler_common.go:293] "Volume detached for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.652211 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1bebdadb-691e-41d5-8bff-00fe08591c75","Type":"ContainerStarted","Data":"35492a826c3ab44fab48396522b6d29c4bee852785a9521cd21ca1dea2fd3fa3"} Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.654712 4904 generic.go:334] "Generic (PLEG): container finished" podID="f393074f-d269-4122-a831-c39d32c455bb" containerID="e59eeb757a8701354e7f2f7a476236a861b9748fe72833b87c930aa3d3dae7df" exitCode=0 Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.654845 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv-config-nbf9f" event={"ID":"f393074f-d269-4122-a831-c39d32c455bb","Type":"ContainerDied","Data":"e59eeb757a8701354e7f2f7a476236a861b9748fe72833b87c930aa3d3dae7df"} Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.658747 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f0496532-016f-4090-9d82-1b500e179cd1","Type":"ContainerDied","Data":"2c2862ac8c2065b179daee676b662ecffc87ccb9ffaf91ff1d1c908e18d672b6"} Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.658827 4904 scope.go:117] "RemoveContainer" containerID="c3d8b4ebdd13ec15fc3ab7fe700536135f2d7af7604ab5b17db127e469d7b520" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.659109 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.687755 4904 scope.go:117] "RemoveContainer" containerID="d9c382fac45a772efc61624ddcc08150c3f4955473ea65c2c1c49dab6cb5e104" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.712694 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.723858 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.731903 4904 scope.go:117] "RemoveContainer" containerID="309e80dda1be2d3f0158d58a33ad56f06d8569a9860a7a84c80ac9093ab004d9" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.745026 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:54:35 crc kubenswrapper[4904]: E1121 13:54:35.745630 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="init-config-reloader" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.745668 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="init-config-reloader" Nov 21 13:54:35 crc kubenswrapper[4904]: E1121 13:54:35.745708 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="thanos-sidecar" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.745715 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="thanos-sidecar" Nov 21 13:54:35 crc kubenswrapper[4904]: E1121 13:54:35.745735 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="prometheus" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.745741 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="prometheus" Nov 21 13:54:35 crc kubenswrapper[4904]: E1121 13:54:35.745751 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="config-reloader" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.745756 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="config-reloader" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.745965 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="prometheus" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.745986 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="thanos-sidecar" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.745999 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0496532-016f-4090-9d82-1b500e179cd1" containerName="config-reloader" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.748255 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.751406 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.752862 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.752899 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.753020 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.753631 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-nw542" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.756173 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.759058 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.770841 4904 scope.go:117] "RemoveContainer" containerID="765505c8365ce0ec76632eff3fe092dee7bd56917c4ea7d46b65870b6b28d8c9" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.775535 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899057 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-config\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899140 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/804be691-8422-4cf8-bfc1-47a1f3c02294-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899180 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899292 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899323 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7httc\" (UniqueName: \"kubernetes.io/projected/804be691-8422-4cf8-bfc1-47a1f3c02294-kube-api-access-7httc\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899368 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/804be691-8422-4cf8-bfc1-47a1f3c02294-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899400 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899431 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899506 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:35 crc kubenswrapper[4904]: I1121 13:54:35.899576 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/804be691-8422-4cf8-bfc1-47a1f3c02294-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001357 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-config\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001475 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/804be691-8422-4cf8-bfc1-47a1f3c02294-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001510 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001568 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001600 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7httc\" (UniqueName: \"kubernetes.io/projected/804be691-8422-4cf8-bfc1-47a1f3c02294-kube-api-access-7httc\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001648 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/804be691-8422-4cf8-bfc1-47a1f3c02294-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001700 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001730 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001754 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001798 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " 
pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.001873 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/804be691-8422-4cf8-bfc1-47a1f3c02294-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.005886 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/804be691-8422-4cf8-bfc1-47a1f3c02294-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.012275 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-config\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.013086 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.015164 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.016776 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.017495 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.019153 4904 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.019193 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d7b83180c0cc9ea158a959924c2564cac05e2804c5457e8d57944755befee3f/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.022361 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/804be691-8422-4cf8-bfc1-47a1f3c02294-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.031402 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/804be691-8422-4cf8-bfc1-47a1f3c02294-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.042112 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/804be691-8422-4cf8-bfc1-47a1f3c02294-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.046541 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7httc\" (UniqueName: \"kubernetes.io/projected/804be691-8422-4cf8-bfc1-47a1f3c02294-kube-api-access-7httc\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.093286 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-95fc4439-a83a-44fe-bc9d-cff77ea4f558\") pod \"prometheus-metric-storage-0\" (UID: \"804be691-8422-4cf8-bfc1-47a1f3c02294\") " pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.369738 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.527228 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0496532-016f-4090-9d82-1b500e179cd1" path="/var/lib/kubelet/pods/f0496532-016f-4090-9d82-1b500e179cd1/volumes" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.932872 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:36 crc kubenswrapper[4904]: I1121 13:54:36.948949 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae-etc-swift\") pod \"swift-storage-0\" (UID: \"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae\") " pod="openstack/swift-storage-0" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.016776 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.402407 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.549481 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvphn\" (UniqueName: \"kubernetes.io/projected/f393074f-d269-4122-a831-c39d32c455bb-kube-api-access-rvphn\") pod \"f393074f-d269-4122-a831-c39d32c455bb\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.549631 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-additional-scripts\") pod \"f393074f-d269-4122-a831-c39d32c455bb\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.549708 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-log-ovn\") pod \"f393074f-d269-4122-a831-c39d32c455bb\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.549783 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run-ovn\") pod \"f393074f-d269-4122-a831-c39d32c455bb\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.549834 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-scripts\") pod \"f393074f-d269-4122-a831-c39d32c455bb\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.549920 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run\") pod \"f393074f-d269-4122-a831-c39d32c455bb\" (UID: \"f393074f-d269-4122-a831-c39d32c455bb\") " Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.549932 
4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f393074f-d269-4122-a831-c39d32c455bb" (UID: "f393074f-d269-4122-a831-c39d32c455bb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.550119 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f393074f-d269-4122-a831-c39d32c455bb" (UID: "f393074f-d269-4122-a831-c39d32c455bb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.550178 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run" (OuterVolumeSpecName: "var-run") pod "f393074f-d269-4122-a831-c39d32c455bb" (UID: "f393074f-d269-4122-a831-c39d32c455bb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.550642 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f393074f-d269-4122-a831-c39d32c455bb" (UID: "f393074f-d269-4122-a831-c39d32c455bb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.550779 4904 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.550825 4904 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.550837 4904 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f393074f-d269-4122-a831-c39d32c455bb-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.551230 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-scripts" (OuterVolumeSpecName: "scripts") pod "f393074f-d269-4122-a831-c39d32c455bb" (UID: "f393074f-d269-4122-a831-c39d32c455bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.557088 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f393074f-d269-4122-a831-c39d32c455bb-kube-api-access-rvphn" (OuterVolumeSpecName: "kube-api-access-rvphn") pod "f393074f-d269-4122-a831-c39d32c455bb" (UID: "f393074f-d269-4122-a831-c39d32c455bb"). InnerVolumeSpecName "kube-api-access-rvphn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.651982 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gh8lv" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.653719 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvphn\" (UniqueName: \"kubernetes.io/projected/f393074f-d269-4122-a831-c39d32c455bb-kube-api-access-rvphn\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.653759 4904 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.653775 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f393074f-d269-4122-a831-c39d32c455bb-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.705358 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv-config-nbf9f" event={"ID":"f393074f-d269-4122-a831-c39d32c455bb","Type":"ContainerDied","Data":"b711bb894aa0e837ee37a2d34e7a4383a087f030a07c1479a83472f6ce4fbb69"} Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.705427 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b711bb894aa0e837ee37a2d34e7a4383a087f030a07c1479a83472f6ce4fbb69" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.705497 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gh8lv-config-nbf9f" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.800474 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gh8lv-config-nbf9f"] Nov 21 13:54:37 crc kubenswrapper[4904]: W1121 13:54:37.810803 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804be691_8422_4cf8_bfc1_47a1f3c02294.slice/crio-552a22ad4c67a5fdd9d0b88af4609f755dc01ff3028f727b1ae8278c087338e5 WatchSource:0}: Error finding container 552a22ad4c67a5fdd9d0b88af4609f755dc01ff3028f727b1ae8278c087338e5: Status 404 returned error can't find the container with id 552a22ad4c67a5fdd9d0b88af4609f755dc01ff3028f727b1ae8278c087338e5 Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.814874 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gh8lv-config-nbf9f"] Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.827313 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.871415 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.889559 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gh8lv-config-th2lx"] Nov 21 13:54:37 crc kubenswrapper[4904]: E1121 13:54:37.890151 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f393074f-d269-4122-a831-c39d32c455bb" containerName="ovn-config" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.890171 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f393074f-d269-4122-a831-c39d32c455bb" containerName="ovn-config" Nov 21 13:54:37 crc 
kubenswrapper[4904]: I1121 13:54:37.890377 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f393074f-d269-4122-a831-c39d32c455bb" containerName="ovn-config" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.896612 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.897984 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gh8lv-config-th2lx"] Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.901633 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.964822 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run-ovn\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.964941 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2svt\" (UniqueName: \"kubernetes.io/projected/1c9d65bf-5d02-45be-bc7b-999f9f59149d-kube-api-access-m2svt\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.964980 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.965183 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-additional-scripts\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.965310 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-scripts\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:37 crc kubenswrapper[4904]: I1121 13:54:37.965367 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-log-ovn\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067218 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2svt\" (UniqueName: \"kubernetes.io/projected/1c9d65bf-5d02-45be-bc7b-999f9f59149d-kube-api-access-m2svt\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: 
\"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067297 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067371 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-additional-scripts\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067432 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-scripts\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067456 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-log-ovn\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067515 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run-ovn\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067825 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run-ovn\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067870 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-log-ovn\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.067920 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.068458 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-additional-scripts\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " 
pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.070150 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-scripts\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.089202 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2svt\" (UniqueName: \"kubernetes.io/projected/1c9d65bf-5d02-45be-bc7b-999f9f59149d-kube-api-access-m2svt\") pod \"ovn-controller-gh8lv-config-th2lx\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.230051 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.528594 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f393074f-d269-4122-a831-c39d32c455bb" path="/var/lib/kubelet/pods/f393074f-d269-4122-a831-c39d32c455bb/volumes" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.665864 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.709323 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gh8lv-config-th2lx"] Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.727640 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"490caf2bc4aef587482cb9095c621dce37dbd6f3ebd53f018a850041362e94fa"} Nov 21 13:54:38 crc kubenswrapper[4904]: I1121 13:54:38.730084 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"804be691-8422-4cf8-bfc1-47a1f3c02294","Type":"ContainerStarted","Data":"552a22ad4c67a5fdd9d0b88af4609f755dc01ff3028f727b1ae8278c087338e5"} Nov 21 13:54:38 crc kubenswrapper[4904]: W1121 13:54:38.740867 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c9d65bf_5d02_45be_bc7b_999f9f59149d.slice/crio-b4d753036733e425e0c20ea942d092ce869c8ce9826079ff0beeda530a976a1c WatchSource:0}: Error finding container b4d753036733e425e0c20ea942d092ce869c8ce9826079ff0beeda530a976a1c: Status 404 returned error can't find the container with id b4d753036733e425e0c20ea942d092ce869c8ce9826079ff0beeda530a976a1c Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.248268 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-nwl7h"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.257898 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.296752 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nwl7h"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.427101 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ad7f52-daab-4bea-a519-5adf0f636168-operator-scripts\") pod \"cinder-db-create-nwl7h\" (UID: \"19ad7f52-daab-4bea-a519-5adf0f636168\") " pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.427696 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlbck\" (UniqueName: \"kubernetes.io/projected/19ad7f52-daab-4bea-a519-5adf0f636168-kube-api-access-vlbck\") pod \"cinder-db-create-nwl7h\" (UID: \"19ad7f52-daab-4bea-a519-5adf0f636168\") " pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.442999 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9zzvb"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.444367 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.470485 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9zzvb"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.530932 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba76450-d33c-46a2-ab88-77f4390f174e-operator-scripts\") pod \"barbican-db-create-9zzvb\" (UID: \"fba76450-d33c-46a2-ab88-77f4390f174e\") " pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.531043 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlbck\" (UniqueName: \"kubernetes.io/projected/19ad7f52-daab-4bea-a519-5adf0f636168-kube-api-access-vlbck\") pod \"cinder-db-create-nwl7h\" (UID: \"19ad7f52-daab-4bea-a519-5adf0f636168\") " pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.531082 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8xjs\" (UniqueName: \"kubernetes.io/projected/fba76450-d33c-46a2-ab88-77f4390f174e-kube-api-access-p8xjs\") pod \"barbican-db-create-9zzvb\" (UID: \"fba76450-d33c-46a2-ab88-77f4390f174e\") " pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.531240 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ad7f52-daab-4bea-a519-5adf0f636168-operator-scripts\") pod \"cinder-db-create-nwl7h\" (UID: \"19ad7f52-daab-4bea-a519-5adf0f636168\") " pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.537630 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ad7f52-daab-4bea-a519-5adf0f636168-operator-scripts\") pod \"cinder-db-create-nwl7h\" (UID: \"19ad7f52-daab-4bea-a519-5adf0f636168\") " pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.580868 
4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-717d-account-create-cfcmh"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.588369 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.594827 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.601142 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-717d-account-create-cfcmh"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.625529 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlbck\" (UniqueName: \"kubernetes.io/projected/19ad7f52-daab-4bea-a519-5adf0f636168-kube-api-access-vlbck\") pod \"cinder-db-create-nwl7h\" (UID: \"19ad7f52-daab-4bea-a519-5adf0f636168\") " pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.635053 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba76450-d33c-46a2-ab88-77f4390f174e-operator-scripts\") pod \"barbican-db-create-9zzvb\" (UID: \"fba76450-d33c-46a2-ab88-77f4390f174e\") " pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.635441 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8xjs\" (UniqueName: \"kubernetes.io/projected/fba76450-d33c-46a2-ab88-77f4390f174e-kube-api-access-p8xjs\") pod \"barbican-db-create-9zzvb\" (UID: \"fba76450-d33c-46a2-ab88-77f4390f174e\") " pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.636957 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba76450-d33c-46a2-ab88-77f4390f174e-operator-scripts\") pod \"barbican-db-create-9zzvb\" (UID: \"fba76450-d33c-46a2-ab88-77f4390f174e\") " pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.693131 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-zr5d7"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.738103 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-operator-scripts\") pod \"cinder-717d-account-create-cfcmh\" (UID: \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\") " pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.738229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dswnl\" (UniqueName: \"kubernetes.io/projected/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-kube-api-access-dswnl\") pod \"cinder-717d-account-create-cfcmh\" (UID: \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\") " pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.758208 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-zr5d7"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.758260 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ssmh2"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.759066 4904 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.762400 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8xjs\" (UniqueName: \"kubernetes.io/projected/fba76450-d33c-46a2-ab88-77f4390f174e-kube-api-access-p8xjs\") pod \"barbican-db-create-9zzvb\" (UID: \"fba76450-d33c-46a2-ab88-77f4390f174e\") " pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.786163 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.818441 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ssmh2"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.818619 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.843964 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.846349 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-operator-scripts\") pod \"cinder-717d-account-create-cfcmh\" (UID: \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\") " pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.846433 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dswnl\" (UniqueName: \"kubernetes.io/projected/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-kube-api-access-dswnl\") pod \"cinder-717d-account-create-cfcmh\" (UID: \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\") " pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.846958 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.847157 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.847722 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fbg72" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.847818 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-operator-scripts\") pod \"cinder-717d-account-create-cfcmh\" (UID: \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\") " pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.860440 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv-config-th2lx" event={"ID":"1c9d65bf-5d02-45be-bc7b-999f9f59149d","Type":"ContainerStarted","Data":"b4d753036733e425e0c20ea942d092ce869c8ce9826079ff0beeda530a976a1c"} Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.876077 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-13a7-account-create-qbk7f"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.878850 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.879441 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.883181 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.887547 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dswnl\" (UniqueName: \"kubernetes.io/projected/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-kube-api-access-dswnl\") pod \"cinder-717d-account-create-cfcmh\" (UID: \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\") " pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.919625 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-13a7-account-create-qbk7f"] Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.921500 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.949012 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f71b15da-cc9c-4358-a95d-36baef7254e4-operator-scripts\") pod \"heat-db-create-zr5d7\" (UID: \"f71b15da-cc9c-4358-a95d-36baef7254e4\") " pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.949133 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-combined-ca-bundle\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.949191 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-config-data\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.949208 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkh7f\" (UniqueName: \"kubernetes.io/projected/34287b99-9180-4bba-ae2d-bbe3eda9056f-kube-api-access-bkh7f\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:39 crc kubenswrapper[4904]: I1121 13:54:39.949229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnlws\" (UniqueName: \"kubernetes.io/projected/f71b15da-cc9c-4358-a95d-36baef7254e4-kube-api-access-hnlws\") pod \"heat-db-create-zr5d7\" (UID: \"f71b15da-cc9c-4358-a95d-36baef7254e4\") " pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.018002 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-qnrm6"] Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.023148 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.038974 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qnrm6"] Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.051563 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgbd\" (UniqueName: \"kubernetes.io/projected/1d76b81c-875e-4554-9b27-1334b70ef872-kube-api-access-vkgbd\") pod \"barbican-13a7-account-create-qbk7f\" (UID: \"1d76b81c-875e-4554-9b27-1334b70ef872\") " pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.051880 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-combined-ca-bundle\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.051962 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d76b81c-875e-4554-9b27-1334b70ef872-operator-scripts\") pod \"barbican-13a7-account-create-qbk7f\" (UID: \"1d76b81c-875e-4554-9b27-1334b70ef872\") " pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.052070 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkh7f\" (UniqueName: \"kubernetes.io/projected/34287b99-9180-4bba-ae2d-bbe3eda9056f-kube-api-access-bkh7f\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.052140 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-config-data\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.052213 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnlws\" (UniqueName: \"kubernetes.io/projected/f71b15da-cc9c-4358-a95d-36baef7254e4-kube-api-access-hnlws\") pod \"heat-db-create-zr5d7\" (UID: \"f71b15da-cc9c-4358-a95d-36baef7254e4\") " pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.052338 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f71b15da-cc9c-4358-a95d-36baef7254e4-operator-scripts\") pod \"heat-db-create-zr5d7\" (UID: \"f71b15da-cc9c-4358-a95d-36baef7254e4\") " pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.054133 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f71b15da-cc9c-4358-a95d-36baef7254e4-operator-scripts\") pod \"heat-db-create-zr5d7\" (UID: \"f71b15da-cc9c-4358-a95d-36baef7254e4\") " pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.056155 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-a701-account-create-bsz8k"] Nov 21 13:54:40 crc 
kubenswrapper[4904]: I1121 13:54:40.065266 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.067582 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-combined-ca-bundle\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.072372 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.097597 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkh7f\" (UniqueName: \"kubernetes.io/projected/34287b99-9180-4bba-ae2d-bbe3eda9056f-kube-api-access-bkh7f\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.101370 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-config-data\") pod \"keystone-db-sync-ssmh2\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.101885 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnlws\" (UniqueName: \"kubernetes.io/projected/f71b15da-cc9c-4358-a95d-36baef7254e4-kube-api-access-hnlws\") pod \"heat-db-create-zr5d7\" (UID: \"f71b15da-cc9c-4358-a95d-36baef7254e4\") " pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.108234 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-a701-account-create-bsz8k"] Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.126640 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-0480-account-create-whgf4"] Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.128608 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.131052 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.139895 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0480-account-create-whgf4"] Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.158125 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-operator-scripts\") pod \"heat-a701-account-create-bsz8k\" (UID: \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\") " pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.158252 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x85q\" (UniqueName: \"kubernetes.io/projected/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-kube-api-access-5x85q\") pod \"heat-a701-account-create-bsz8k\" (UID: \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\") " pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.158295 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgbd\" (UniqueName: \"kubernetes.io/projected/1d76b81c-875e-4554-9b27-1334b70ef872-kube-api-access-vkgbd\") pod \"barbican-13a7-account-create-qbk7f\" (UID: \"1d76b81c-875e-4554-9b27-1334b70ef872\") " pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.158333 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47rkn\" (UniqueName: \"kubernetes.io/projected/8852fc8c-175e-4cf7-854d-230dd3c32d08-kube-api-access-47rkn\") pod \"neutron-db-create-qnrm6\" (UID: \"8852fc8c-175e-4cf7-854d-230dd3c32d08\") " pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.158357 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d76b81c-875e-4554-9b27-1334b70ef872-operator-scripts\") pod \"barbican-13a7-account-create-qbk7f\" (UID: \"1d76b81c-875e-4554-9b27-1334b70ef872\") " pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.158411 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8852fc8c-175e-4cf7-854d-230dd3c32d08-operator-scripts\") pod \"neutron-db-create-qnrm6\" (UID: \"8852fc8c-175e-4cf7-854d-230dd3c32d08\") " pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.159669 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d76b81c-875e-4554-9b27-1334b70ef872-operator-scripts\") pod \"barbican-13a7-account-create-qbk7f\" (UID: \"1d76b81c-875e-4554-9b27-1334b70ef872\") " pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.162824 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.179607 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.223518 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgbd\" (UniqueName: \"kubernetes.io/projected/1d76b81c-875e-4554-9b27-1334b70ef872-kube-api-access-vkgbd\") pod \"barbican-13a7-account-create-qbk7f\" (UID: \"1d76b81c-875e-4554-9b27-1334b70ef872\") " pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.269772 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47rkn\" (UniqueName: \"kubernetes.io/projected/8852fc8c-175e-4cf7-854d-230dd3c32d08-kube-api-access-47rkn\") pod \"neutron-db-create-qnrm6\" (UID: \"8852fc8c-175e-4cf7-854d-230dd3c32d08\") " pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.274817 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnts6\" (UniqueName: \"kubernetes.io/projected/90da1076-1698-403c-9b8b-6a93b89c47cf-kube-api-access-tnts6\") pod \"neutron-0480-account-create-whgf4\" (UID: \"90da1076-1698-403c-9b8b-6a93b89c47cf\") " pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.274868 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1076-1698-403c-9b8b-6a93b89c47cf-operator-scripts\") pod \"neutron-0480-account-create-whgf4\" (UID: \"90da1076-1698-403c-9b8b-6a93b89c47cf\") " pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.275391 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8852fc8c-175e-4cf7-854d-230dd3c32d08-operator-scripts\") pod \"neutron-db-create-qnrm6\" (UID: \"8852fc8c-175e-4cf7-854d-230dd3c32d08\") " pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.275509 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-operator-scripts\") pod \"heat-a701-account-create-bsz8k\" (UID: \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\") " pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.275754 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x85q\" (UniqueName: \"kubernetes.io/projected/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-kube-api-access-5x85q\") pod \"heat-a701-account-create-bsz8k\" (UID: \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\") " pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.276680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8852fc8c-175e-4cf7-854d-230dd3c32d08-operator-scripts\") pod \"neutron-db-create-qnrm6\" (UID: \"8852fc8c-175e-4cf7-854d-230dd3c32d08\") " pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.278379 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-operator-scripts\") pod \"heat-a701-account-create-bsz8k\" (UID: \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\") " pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.306182 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x85q\" (UniqueName: \"kubernetes.io/projected/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-kube-api-access-5x85q\") pod \"heat-a701-account-create-bsz8k\" (UID: \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\") " pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.351871 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47rkn\" (UniqueName: \"kubernetes.io/projected/8852fc8c-175e-4cf7-854d-230dd3c32d08-kube-api-access-47rkn\") pod \"neutron-db-create-qnrm6\" (UID: \"8852fc8c-175e-4cf7-854d-230dd3c32d08\") " pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.369559 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.377752 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnts6\" (UniqueName: \"kubernetes.io/projected/90da1076-1698-403c-9b8b-6a93b89c47cf-kube-api-access-tnts6\") pod \"neutron-0480-account-create-whgf4\" (UID: \"90da1076-1698-403c-9b8b-6a93b89c47cf\") " pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.377792 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1076-1698-403c-9b8b-6a93b89c47cf-operator-scripts\") pod \"neutron-0480-account-create-whgf4\" (UID: \"90da1076-1698-403c-9b8b-6a93b89c47cf\") " pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.482315 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.504112 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.523253 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1076-1698-403c-9b8b-6a93b89c47cf-operator-scripts\") pod \"neutron-0480-account-create-whgf4\" (UID: \"90da1076-1698-403c-9b8b-6a93b89c47cf\") " pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.548947 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnts6\" (UniqueName: \"kubernetes.io/projected/90da1076-1698-403c-9b8b-6a93b89c47cf-kube-api-access-tnts6\") pod \"neutron-0480-account-create-whgf4\" (UID: \"90da1076-1698-403c-9b8b-6a93b89c47cf\") " pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.735916 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-nwl7h"] Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.826078 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.861622 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9zzvb"] Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.892619 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nwl7h" event={"ID":"19ad7f52-daab-4bea-a519-5adf0f636168","Type":"ContainerStarted","Data":"acd487d6b74da4972b275e79e3ee698a61d8787f0dabee9f13e642cc228a6455"} Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.902144 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv-config-th2lx" event={"ID":"1c9d65bf-5d02-45be-bc7b-999f9f59149d","Type":"ContainerStarted","Data":"189ffcc1823f66d47f3beea3fd56132aa9f41d3dddb41d8cf31f37c656129911"} Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.916219 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1bebdadb-691e-41d5-8bff-00fe08591c75","Type":"ContainerStarted","Data":"7c9159023caa78eea12ced90dc14587e997d300cc1d4acb1b2f582a122b354ca"} Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.927023 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gh8lv-config-th2lx" podStartSLOduration=3.927002542 podStartE2EDuration="3.927002542s" podCreationTimestamp="2025-11-21 13:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:54:40.920723949 +0000 UTC m=+1355.042256501" watchObservedRunningTime="2025-11-21 13:54:40.927002542 +0000 UTC m=+1355.048535094" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.965138 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=4.915904101 podStartE2EDuration="6.965112655s" podCreationTimestamp="2025-11-21 13:54:34 +0000 UTC" firstStartedPulling="2025-11-21 13:54:35.205596608 +0000 UTC m=+1349.327129160" lastFinishedPulling="2025-11-21 13:54:37.254805162 +0000 UTC m=+1351.376337714" observedRunningTime="2025-11-21 13:54:40.942681066 +0000 UTC m=+1355.064213618" watchObservedRunningTime="2025-11-21 13:54:40.965112655 +0000 UTC m=+1355.086645207" Nov 21 13:54:40 crc kubenswrapper[4904]: I1121 13:54:40.990895 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ssmh2"] Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.030291 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-717d-account-create-cfcmh"] Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.035217 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-zr5d7"] Nov 21 13:54:41 crc kubenswrapper[4904]: W1121 13:54:41.092721 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf71b15da_cc9c_4358_a95d_36baef7254e4.slice/crio-d458b87ad93fc38b8d4db7fbc17807a37e5caa8ba20c428b5d1a893166bb527b WatchSource:0}: Error finding container d458b87ad93fc38b8d4db7fbc17807a37e5caa8ba20c428b5d1a893166bb527b: Status 404 returned error can't find the container with id d458b87ad93fc38b8d4db7fbc17807a37e5caa8ba20c428b5d1a893166bb527b Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.309265 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-a701-account-create-bsz8k"] Nov 21 
13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.333279 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-qnrm6"] Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.507535 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-13a7-account-create-qbk7f"] Nov 21 13:54:41 crc kubenswrapper[4904]: W1121 13:54:41.526141 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d76b81c_875e_4554_9b27_1334b70ef872.slice/crio-7d8cc85c97d9e8a54f564a8f48d0ab65b4de99387106d9e92aa2d9106d38878a WatchSource:0}: Error finding container 7d8cc85c97d9e8a54f564a8f48d0ab65b4de99387106d9e92aa2d9106d38878a: Status 404 returned error can't find the container with id 7d8cc85c97d9e8a54f564a8f48d0ab65b4de99387106d9e92aa2d9106d38878a Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.631633 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-0480-account-create-whgf4"] Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.937274 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qnrm6" event={"ID":"8852fc8c-175e-4cf7-854d-230dd3c32d08","Type":"ContainerStarted","Data":"c4fcfcfa98679fb456f26fb56dd4c5a5ec2a3ada46983d097261b2a869aa13e7"} Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.943563 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0480-account-create-whgf4" event={"ID":"90da1076-1698-403c-9b8b-6a93b89c47cf","Type":"ContainerStarted","Data":"3547694bfc20da05def1544b0e7e86f7c5a16ba60e8e473def3ae7b038982553"} Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.945463 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ssmh2" event={"ID":"34287b99-9180-4bba-ae2d-bbe3eda9056f","Type":"ContainerStarted","Data":"e5cdcfc10c6767c3b640ff0be7b0470d08828c00877915fced66249fcb0ffca9"} Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.947216 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-zr5d7" event={"ID":"f71b15da-cc9c-4358-a95d-36baef7254e4","Type":"ContainerStarted","Data":"d458b87ad93fc38b8d4db7fbc17807a37e5caa8ba20c428b5d1a893166bb527b"} Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.960636 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9zzvb" event={"ID":"fba76450-d33c-46a2-ab88-77f4390f174e","Type":"ContainerStarted","Data":"af0379522d12e8d4b4868b89e594ec2b7d27bd8044d50c6a270f28466eb66a8a"} Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.977970 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-13a7-account-create-qbk7f" event={"ID":"1d76b81c-875e-4554-9b27-1334b70ef872","Type":"ContainerStarted","Data":"7d8cc85c97d9e8a54f564a8f48d0ab65b4de99387106d9e92aa2d9106d38878a"} Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.989483 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c9d65bf-5d02-45be-bc7b-999f9f59149d" containerID="189ffcc1823f66d47f3beea3fd56132aa9f41d3dddb41d8cf31f37c656129911" exitCode=0 Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.989576 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gh8lv-config-th2lx" event={"ID":"1c9d65bf-5d02-45be-bc7b-999f9f59149d","Type":"ContainerDied","Data":"189ffcc1823f66d47f3beea3fd56132aa9f41d3dddb41d8cf31f37c656129911"} Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.994523 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-717d-account-create-cfcmh" event={"ID":"91f4eaf0-e81c-4a85-a91a-7ca22ced143c","Type":"ContainerStarted","Data":"9858b78d330232fb5ead24965619e7c3d1857dbad3b08aaab283bdb317ce12e5"} Nov 21 13:54:41 crc kubenswrapper[4904]: I1121 13:54:41.998565 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a701-account-create-bsz8k" event={"ID":"4c9cc95e-4ef2-44ef-a1e0-14a09425771b","Type":"ContainerStarted","Data":"bddf920a4715931bf575c8daf0f1eabcad6e6f0a849e61c51f65fc21645a01ca"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.019565 4904 generic.go:334] "Generic (PLEG): container finished" podID="1d76b81c-875e-4554-9b27-1334b70ef872" containerID="2e8d1a7ed4b163160a12b4ade8267b8a321c514cfce88e0bef996eab97222411" exitCode=0 Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.019675 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-13a7-account-create-qbk7f" event={"ID":"1d76b81c-875e-4554-9b27-1334b70ef872","Type":"ContainerDied","Data":"2e8d1a7ed4b163160a12b4ade8267b8a321c514cfce88e0bef996eab97222411"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.023389 4904 generic.go:334] "Generic (PLEG): container finished" podID="91f4eaf0-e81c-4a85-a91a-7ca22ced143c" containerID="5f376de77b9f94a9f4ef54754937f734b69fddfc3a4cbbf527161c7185e6f750" exitCode=0 Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.023469 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-717d-account-create-cfcmh" event={"ID":"91f4eaf0-e81c-4a85-a91a-7ca22ced143c","Type":"ContainerDied","Data":"5f376de77b9f94a9f4ef54754937f734b69fddfc3a4cbbf527161c7185e6f750"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.027692 4904 generic.go:334] "Generic (PLEG): container finished" podID="f71b15da-cc9c-4358-a95d-36baef7254e4" containerID="8d0e41b006c0d07dc2ff3e3f6200bf9798082a398991e9c332d83f61551ff435" exitCode=0 Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.027738 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-zr5d7" event={"ID":"f71b15da-cc9c-4358-a95d-36baef7254e4","Type":"ContainerDied","Data":"8d0e41b006c0d07dc2ff3e3f6200bf9798082a398991e9c332d83f61551ff435"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.037477 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"804be691-8422-4cf8-bfc1-47a1f3c02294","Type":"ContainerStarted","Data":"b6ef98e0c33130470791b3d0eb1dc3e29be41652b10f9decce44d60c0383c67b"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.050032 4904 generic.go:334] "Generic (PLEG): container finished" podID="19ad7f52-daab-4bea-a519-5adf0f636168" containerID="0887d28311b1d16d6df90fe17f093f78237f8638ae47f50c888b030f9f593206" exitCode=0 Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.050094 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nwl7h" event={"ID":"19ad7f52-daab-4bea-a519-5adf0f636168","Type":"ContainerDied","Data":"0887d28311b1d16d6df90fe17f093f78237f8638ae47f50c888b030f9f593206"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.074073 4904 generic.go:334] "Generic (PLEG): container finished" podID="fba76450-d33c-46a2-ab88-77f4390f174e" containerID="d30a07e889e5a294f3fcbd6e6669fb24d224e91d1288c491b8eb1a063a595977" exitCode=0 Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.074371 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-9zzvb" event={"ID":"fba76450-d33c-46a2-ab88-77f4390f174e","Type":"ContainerDied","Data":"d30a07e889e5a294f3fcbd6e6669fb24d224e91d1288c491b8eb1a063a595977"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.084152 4904 generic.go:334] "Generic (PLEG): container finished" podID="90da1076-1698-403c-9b8b-6a93b89c47cf" containerID="397caf680fc273b5e9e5bbcd53f1b327ab961116f73bdf8007fb1a10138ff398" exitCode=0 Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.084246 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0480-account-create-whgf4" event={"ID":"90da1076-1698-403c-9b8b-6a93b89c47cf","Type":"ContainerDied","Data":"397caf680fc273b5e9e5bbcd53f1b327ab961116f73bdf8007fb1a10138ff398"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.102870 4904 generic.go:334] "Generic (PLEG): container finished" podID="4c9cc95e-4ef2-44ef-a1e0-14a09425771b" containerID="0533f6d274ee3b6625dd22dc9148a53e2d1e773440d50b82002581f1f0d2ce51" exitCode=0 Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.103069 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a701-account-create-bsz8k" event={"ID":"4c9cc95e-4ef2-44ef-a1e0-14a09425771b","Type":"ContainerDied","Data":"0533f6d274ee3b6625dd22dc9148a53e2d1e773440d50b82002581f1f0d2ce51"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.112974 4904 generic.go:334] "Generic (PLEG): container finished" podID="8852fc8c-175e-4cf7-854d-230dd3c32d08" containerID="14d5b92e7f3ee6d42f3235f09f83911e3a1c3d5911527c6984e827a8e19bc0bb" exitCode=0 Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.113134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qnrm6" event={"ID":"8852fc8c-175e-4cf7-854d-230dd3c32d08","Type":"ContainerDied","Data":"14d5b92e7f3ee6d42f3235f09f83911e3a1c3d5911527c6984e827a8e19bc0bb"} Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.638378 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.814884 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2svt\" (UniqueName: \"kubernetes.io/projected/1c9d65bf-5d02-45be-bc7b-999f9f59149d-kube-api-access-m2svt\") pod \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.814972 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-log-ovn\") pod \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.815007 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-additional-scripts\") pod \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.815345 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run-ovn\") pod \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.815467 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-scripts\") pod \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.815677 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run\") pod \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\" (UID: \"1c9d65bf-5d02-45be-bc7b-999f9f59149d\") " Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.817463 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "1c9d65bf-5d02-45be-bc7b-999f9f59149d" (UID: "1c9d65bf-5d02-45be-bc7b-999f9f59149d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.817558 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "1c9d65bf-5d02-45be-bc7b-999f9f59149d" (UID: "1c9d65bf-5d02-45be-bc7b-999f9f59149d"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.818358 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "1c9d65bf-5d02-45be-bc7b-999f9f59149d" (UID: "1c9d65bf-5d02-45be-bc7b-999f9f59149d"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.818650 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-scripts" (OuterVolumeSpecName: "scripts") pod "1c9d65bf-5d02-45be-bc7b-999f9f59149d" (UID: "1c9d65bf-5d02-45be-bc7b-999f9f59149d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.819389 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run" (OuterVolumeSpecName: "var-run") pod "1c9d65bf-5d02-45be-bc7b-999f9f59149d" (UID: "1c9d65bf-5d02-45be-bc7b-999f9f59149d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.820313 4904 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.820367 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.820381 4904 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-run\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.820393 4904 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/1c9d65bf-5d02-45be-bc7b-999f9f59149d-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.820409 4904 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/1c9d65bf-5d02-45be-bc7b-999f9f59149d-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.829370 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c9d65bf-5d02-45be-bc7b-999f9f59149d-kube-api-access-m2svt" (OuterVolumeSpecName: "kube-api-access-m2svt") pod "1c9d65bf-5d02-45be-bc7b-999f9f59149d" (UID: "1c9d65bf-5d02-45be-bc7b-999f9f59149d"). InnerVolumeSpecName "kube-api-access-m2svt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:43 crc kubenswrapper[4904]: I1121 13:54:43.922792 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2svt\" (UniqueName: \"kubernetes.io/projected/1c9d65bf-5d02-45be-bc7b-999f9f59149d-kube-api-access-m2svt\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:44 crc kubenswrapper[4904]: I1121 13:54:44.011891 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gh8lv-config-th2lx"] Nov 21 13:54:44 crc kubenswrapper[4904]: I1121 13:54:44.025438 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gh8lv-config-th2lx"] Nov 21 13:54:44 crc kubenswrapper[4904]: I1121 13:54:44.129001 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"359eabda6527444cce724a0ac6e486c1362cb062c2a707fd6722da0c322cb077"} Nov 21 13:54:44 crc kubenswrapper[4904]: I1121 13:54:44.129054 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"885a975d357b5c799ce4c3b0e7acb444baefb993059dcbccdf2de090f6c689b1"} Nov 21 13:54:44 crc kubenswrapper[4904]: I1121 13:54:44.129068 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"b29dae299c3c51a99d11afa8e56a35d0415e37169140cfbc01a3c4a61751673f"} Nov 21 13:54:44 crc kubenswrapper[4904]: I1121 13:54:44.131830 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gh8lv-config-th2lx" Nov 21 13:54:44 crc kubenswrapper[4904]: I1121 13:54:44.131902 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4d753036733e425e0c20ea942d092ce869c8ce9826079ff0beeda530a976a1c" Nov 21 13:54:44 crc kubenswrapper[4904]: I1121 13:54:44.530511 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c9d65bf-5d02-45be-bc7b-999f9f59149d" path="/var/lib/kubelet/pods/1c9d65bf-5d02-45be-bc7b-999f9f59149d/volumes" Nov 21 13:54:50 crc kubenswrapper[4904]: I1121 13:54:50.205927 4904 generic.go:334] "Generic (PLEG): container finished" podID="804be691-8422-4cf8-bfc1-47a1f3c02294" containerID="b6ef98e0c33130470791b3d0eb1dc3e29be41652b10f9decce44d60c0383c67b" exitCode=0 Nov 21 13:54:50 crc kubenswrapper[4904]: I1121 13:54:50.206015 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"804be691-8422-4cf8-bfc1-47a1f3c02294","Type":"ContainerDied","Data":"b6ef98e0c33130470791b3d0eb1dc3e29be41652b10f9decce44d60c0383c67b"} Nov 21 13:54:55 crc kubenswrapper[4904]: E1121 13:54:55.837377 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Nov 21 13:54:55 crc kubenswrapper[4904]: E1121 13:54:55.838315 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkh7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-ssmh2_openstack(34287b99-9180-4bba-ae2d-bbe3eda9056f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:54:55 crc kubenswrapper[4904]: E1121 13:54:55.839773 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-ssmh2" podUID="34287b99-9180-4bba-ae2d-bbe3eda9056f" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.073919 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.113520 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.132791 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.140193 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47rkn\" (UniqueName: \"kubernetes.io/projected/8852fc8c-175e-4cf7-854d-230dd3c32d08-kube-api-access-47rkn\") pod \"8852fc8c-175e-4cf7-854d-230dd3c32d08\" (UID: \"8852fc8c-175e-4cf7-854d-230dd3c32d08\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.140445 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8852fc8c-175e-4cf7-854d-230dd3c32d08-operator-scripts\") pod \"8852fc8c-175e-4cf7-854d-230dd3c32d08\" (UID: \"8852fc8c-175e-4cf7-854d-230dd3c32d08\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.141566 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8852fc8c-175e-4cf7-854d-230dd3c32d08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8852fc8c-175e-4cf7-854d-230dd3c32d08" (UID: "8852fc8c-175e-4cf7-854d-230dd3c32d08"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.146400 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8852fc8c-175e-4cf7-854d-230dd3c32d08-kube-api-access-47rkn" (OuterVolumeSpecName: "kube-api-access-47rkn") pod "8852fc8c-175e-4cf7-854d-230dd3c32d08" (UID: "8852fc8c-175e-4cf7-854d-230dd3c32d08"). InnerVolumeSpecName "kube-api-access-47rkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.226538 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.239184 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.242632 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x85q\" (UniqueName: \"kubernetes.io/projected/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-kube-api-access-5x85q\") pod \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\" (UID: \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.246017 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-operator-scripts\") pod \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\" (UID: \"4c9cc95e-4ef2-44ef-a1e0-14a09425771b\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.246663 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ad7f52-daab-4bea-a519-5adf0f636168-operator-scripts\") pod \"19ad7f52-daab-4bea-a519-5adf0f636168\" (UID: \"19ad7f52-daab-4bea-a519-5adf0f636168\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.246857 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlbck\" (UniqueName: \"kubernetes.io/projected/19ad7f52-daab-4bea-a519-5adf0f636168-kube-api-access-vlbck\") pod \"19ad7f52-daab-4bea-a519-5adf0f636168\" (UID: \"19ad7f52-daab-4bea-a519-5adf0f636168\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.247495 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47rkn\" (UniqueName: \"kubernetes.io/projected/8852fc8c-175e-4cf7-854d-230dd3c32d08-kube-api-access-47rkn\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.247572 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8852fc8c-175e-4cf7-854d-230dd3c32d08-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.248382 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c9cc95e-4ef2-44ef-a1e0-14a09425771b" (UID: "4c9cc95e-4ef2-44ef-a1e0-14a09425771b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.254011 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ad7f52-daab-4bea-a519-5adf0f636168-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19ad7f52-daab-4bea-a519-5adf0f636168" (UID: "19ad7f52-daab-4bea-a519-5adf0f636168"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.255326 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-kube-api-access-5x85q" (OuterVolumeSpecName: "kube-api-access-5x85q") pod "4c9cc95e-4ef2-44ef-a1e0-14a09425771b" (UID: "4c9cc95e-4ef2-44ef-a1e0-14a09425771b"). InnerVolumeSpecName "kube-api-access-5x85q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.261145 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ad7f52-daab-4bea-a519-5adf0f636168-kube-api-access-vlbck" (OuterVolumeSpecName: "kube-api-access-vlbck") pod "19ad7f52-daab-4bea-a519-5adf0f636168" (UID: "19ad7f52-daab-4bea-a519-5adf0f636168"). InnerVolumeSpecName "kube-api-access-vlbck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.281870 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.290432 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.290587 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.290770 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-13a7-account-create-qbk7f" event={"ID":"1d76b81c-875e-4554-9b27-1334b70ef872","Type":"ContainerDied","Data":"7d8cc85c97d9e8a54f564a8f48d0ab65b4de99387106d9e92aa2d9106d38878a"} Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.290803 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d8cc85c97d9e8a54f564a8f48d0ab65b4de99387106d9e92aa2d9106d38878a" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.305557 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-717d-account-create-cfcmh" event={"ID":"91f4eaf0-e81c-4a85-a91a-7ca22ced143c","Type":"ContainerDied","Data":"9858b78d330232fb5ead24965619e7c3d1857dbad3b08aaab283bdb317ce12e5"} Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.305626 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9858b78d330232fb5ead24965619e7c3d1857dbad3b08aaab283bdb317ce12e5" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.305779 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-717d-account-create-cfcmh" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.309080 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-a701-account-create-bsz8k" event={"ID":"4c9cc95e-4ef2-44ef-a1e0-14a09425771b","Type":"ContainerDied","Data":"bddf920a4715931bf575c8daf0f1eabcad6e6f0a849e61c51f65fc21645a01ca"} Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.309122 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bddf920a4715931bf575c8daf0f1eabcad6e6f0a849e61c51f65fc21645a01ca" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.309179 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-a701-account-create-bsz8k" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.327034 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-zr5d7" event={"ID":"f71b15da-cc9c-4358-a95d-36baef7254e4","Type":"ContainerDied","Data":"d458b87ad93fc38b8d4db7fbc17807a37e5caa8ba20c428b5d1a893166bb527b"} Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.327093 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d458b87ad93fc38b8d4db7fbc17807a37e5caa8ba20c428b5d1a893166bb527b" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.327191 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-zr5d7" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.336227 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-qnrm6" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.336225 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-qnrm6" event={"ID":"8852fc8c-175e-4cf7-854d-230dd3c32d08","Type":"ContainerDied","Data":"c4fcfcfa98679fb456f26fb56dd4c5a5ec2a3ada46983d097261b2a869aa13e7"} Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.336309 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4fcfcfa98679fb456f26fb56dd4c5a5ec2a3ada46983d097261b2a869aa13e7" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.340700 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9zzvb" event={"ID":"fba76450-d33c-46a2-ab88-77f4390f174e","Type":"ContainerDied","Data":"af0379522d12e8d4b4868b89e594ec2b7d27bd8044d50c6a270f28466eb66a8a"} Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.340751 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af0379522d12e8d4b4868b89e594ec2b7d27bd8044d50c6a270f28466eb66a8a" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.340832 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9zzvb" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.344609 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-nwl7h" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.344774 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-nwl7h" event={"ID":"19ad7f52-daab-4bea-a519-5adf0f636168","Type":"ContainerDied","Data":"acd487d6b74da4972b275e79e3ee698a61d8787f0dabee9f13e642cc228a6455"} Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.344808 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acd487d6b74da4972b275e79e3ee698a61d8787f0dabee9f13e642cc228a6455" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.348597 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dswnl\" (UniqueName: \"kubernetes.io/projected/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-kube-api-access-dswnl\") pod \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\" (UID: \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.348739 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1076-1698-403c-9b8b-6a93b89c47cf-operator-scripts\") pod \"90da1076-1698-403c-9b8b-6a93b89c47cf\" (UID: \"90da1076-1698-403c-9b8b-6a93b89c47cf\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.348849 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnts6\" (UniqueName: \"kubernetes.io/projected/90da1076-1698-403c-9b8b-6a93b89c47cf-kube-api-access-tnts6\") pod \"90da1076-1698-403c-9b8b-6a93b89c47cf\" (UID: \"90da1076-1698-403c-9b8b-6a93b89c47cf\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.348900 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-operator-scripts\") pod \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\" (UID: \"91f4eaf0-e81c-4a85-a91a-7ca22ced143c\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.350700 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90da1076-1698-403c-9b8b-6a93b89c47cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90da1076-1698-403c-9b8b-6a93b89c47cf" (UID: "90da1076-1698-403c-9b8b-6a93b89c47cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.351220 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91f4eaf0-e81c-4a85-a91a-7ca22ced143c" (UID: "91f4eaf0-e81c-4a85-a91a-7ca22ced143c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.351419 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x85q\" (UniqueName: \"kubernetes.io/projected/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-kube-api-access-5x85q\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.351442 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1076-1698-403c-9b8b-6a93b89c47cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.351456 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9cc95e-4ef2-44ef-a1e0-14a09425771b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.351465 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.351478 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ad7f52-daab-4bea-a519-5adf0f636168-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.351490 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlbck\" (UniqueName: \"kubernetes.io/projected/19ad7f52-daab-4bea-a519-5adf0f636168-kube-api-access-vlbck\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.353858 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-kube-api-access-dswnl" (OuterVolumeSpecName: "kube-api-access-dswnl") pod "91f4eaf0-e81c-4a85-a91a-7ca22ced143c" (UID: "91f4eaf0-e81c-4a85-a91a-7ca22ced143c"). InnerVolumeSpecName "kube-api-access-dswnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.357979 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90da1076-1698-403c-9b8b-6a93b89c47cf-kube-api-access-tnts6" (OuterVolumeSpecName: "kube-api-access-tnts6") pod "90da1076-1698-403c-9b8b-6a93b89c47cf" (UID: "90da1076-1698-403c-9b8b-6a93b89c47cf"). InnerVolumeSpecName "kube-api-access-tnts6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.358934 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-0480-account-create-whgf4" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.360159 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-0480-account-create-whgf4" event={"ID":"90da1076-1698-403c-9b8b-6a93b89c47cf","Type":"ContainerDied","Data":"3547694bfc20da05def1544b0e7e86f7c5a16ba60e8e473def3ae7b038982553"} Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.360215 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3547694bfc20da05def1544b0e7e86f7c5a16ba60e8e473def3ae7b038982553" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.376832 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"804be691-8422-4cf8-bfc1-47a1f3c02294","Type":"ContainerStarted","Data":"db1e39a69ae0ba4d13fb46117bad39fb047d53049d35a428cbd63f3f7eec6e4b"} Nov 21 13:54:56 crc kubenswrapper[4904]: E1121 13:54:56.381152 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-ssmh2" podUID="34287b99-9180-4bba-ae2d-bbe3eda9056f" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.454931 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnlws\" (UniqueName: \"kubernetes.io/projected/f71b15da-cc9c-4358-a95d-36baef7254e4-kube-api-access-hnlws\") pod \"f71b15da-cc9c-4358-a95d-36baef7254e4\" (UID: \"f71b15da-cc9c-4358-a95d-36baef7254e4\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.455145 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d76b81c-875e-4554-9b27-1334b70ef872-operator-scripts\") pod \"1d76b81c-875e-4554-9b27-1334b70ef872\" (UID: \"1d76b81c-875e-4554-9b27-1334b70ef872\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.455176 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8xjs\" (UniqueName: \"kubernetes.io/projected/fba76450-d33c-46a2-ab88-77f4390f174e-kube-api-access-p8xjs\") pod \"fba76450-d33c-46a2-ab88-77f4390f174e\" (UID: \"fba76450-d33c-46a2-ab88-77f4390f174e\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.455264 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkgbd\" (UniqueName: \"kubernetes.io/projected/1d76b81c-875e-4554-9b27-1334b70ef872-kube-api-access-vkgbd\") pod \"1d76b81c-875e-4554-9b27-1334b70ef872\" (UID: \"1d76b81c-875e-4554-9b27-1334b70ef872\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.455356 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba76450-d33c-46a2-ab88-77f4390f174e-operator-scripts\") pod \"fba76450-d33c-46a2-ab88-77f4390f174e\" (UID: \"fba76450-d33c-46a2-ab88-77f4390f174e\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.455404 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f71b15da-cc9c-4358-a95d-36baef7254e4-operator-scripts\") pod \"f71b15da-cc9c-4358-a95d-36baef7254e4\" (UID: \"f71b15da-cc9c-4358-a95d-36baef7254e4\") " Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 
13:54:56.455923 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dswnl\" (UniqueName: \"kubernetes.io/projected/91f4eaf0-e81c-4a85-a91a-7ca22ced143c-kube-api-access-dswnl\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.455943 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnts6\" (UniqueName: \"kubernetes.io/projected/90da1076-1698-403c-9b8b-6a93b89c47cf-kube-api-access-tnts6\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.460575 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f71b15da-cc9c-4358-a95d-36baef7254e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f71b15da-cc9c-4358-a95d-36baef7254e4" (UID: "f71b15da-cc9c-4358-a95d-36baef7254e4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.460609 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d76b81c-875e-4554-9b27-1334b70ef872-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d76b81c-875e-4554-9b27-1334b70ef872" (UID: "1d76b81c-875e-4554-9b27-1334b70ef872"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.460638 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fba76450-d33c-46a2-ab88-77f4390f174e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fba76450-d33c-46a2-ab88-77f4390f174e" (UID: "fba76450-d33c-46a2-ab88-77f4390f174e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.465981 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba76450-d33c-46a2-ab88-77f4390f174e-kube-api-access-p8xjs" (OuterVolumeSpecName: "kube-api-access-p8xjs") pod "fba76450-d33c-46a2-ab88-77f4390f174e" (UID: "fba76450-d33c-46a2-ab88-77f4390f174e"). InnerVolumeSpecName "kube-api-access-p8xjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.467836 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d76b81c-875e-4554-9b27-1334b70ef872-kube-api-access-vkgbd" (OuterVolumeSpecName: "kube-api-access-vkgbd") pod "1d76b81c-875e-4554-9b27-1334b70ef872" (UID: "1d76b81c-875e-4554-9b27-1334b70ef872"). InnerVolumeSpecName "kube-api-access-vkgbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.470085 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71b15da-cc9c-4358-a95d-36baef7254e4-kube-api-access-hnlws" (OuterVolumeSpecName: "kube-api-access-hnlws") pod "f71b15da-cc9c-4358-a95d-36baef7254e4" (UID: "f71b15da-cc9c-4358-a95d-36baef7254e4"). InnerVolumeSpecName "kube-api-access-hnlws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.557939 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d76b81c-875e-4554-9b27-1334b70ef872-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.557974 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8xjs\" (UniqueName: \"kubernetes.io/projected/fba76450-d33c-46a2-ab88-77f4390f174e-kube-api-access-p8xjs\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.557986 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkgbd\" (UniqueName: \"kubernetes.io/projected/1d76b81c-875e-4554-9b27-1334b70ef872-kube-api-access-vkgbd\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.557997 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fba76450-d33c-46a2-ab88-77f4390f174e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.558008 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f71b15da-cc9c-4358-a95d-36baef7254e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:56 crc kubenswrapper[4904]: I1121 13:54:56.558020 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnlws\" (UniqueName: \"kubernetes.io/projected/f71b15da-cc9c-4358-a95d-36baef7254e4-kube-api-access-hnlws\") on node \"crc\" DevicePath \"\"" Nov 21 13:54:57 crc kubenswrapper[4904]: I1121 13:54:57.390930 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"9fe8ef9ea14bd8633651b41bbf597f1003708ce8d51761639aa27754cd80e0b1"} Nov 21 13:54:57 crc kubenswrapper[4904]: I1121 13:54:57.393975 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-13a7-account-create-qbk7f" Nov 21 13:54:57 crc kubenswrapper[4904]: I1121 13:54:57.394614 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v45q6" event={"ID":"7ccebc8d-97ab-4df0-be88-3e6147a45b7a","Type":"ContainerStarted","Data":"13eca21696bfd43d8c0a5eb2d1633e75c3b876688948eccbd3b1f49182d2d886"} Nov 21 13:54:57 crc kubenswrapper[4904]: I1121 13:54:57.416426 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-v45q6" podStartSLOduration=2.6734375249999998 podStartE2EDuration="25.416411859s" podCreationTimestamp="2025-11-21 13:54:32 +0000 UTC" firstStartedPulling="2025-11-21 13:54:33.181219272 +0000 UTC m=+1347.302751814" lastFinishedPulling="2025-11-21 13:54:55.924193576 +0000 UTC m=+1370.045726148" observedRunningTime="2025-11-21 13:54:57.410770222 +0000 UTC m=+1371.532302774" watchObservedRunningTime="2025-11-21 13:54:57.416411859 +0000 UTC m=+1371.537944411" Nov 21 13:54:58 crc kubenswrapper[4904]: I1121 13:54:58.407703 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"bc570740f039bbc6cb15c15cbe3d82c2a7a87fa6fdb6afa2b582627c3f2da478"} Nov 21 13:54:58 crc kubenswrapper[4904]: I1121 13:54:58.408562 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"4c627d0ff33b109fa902ac17efb0a09f14885604640e19a68855868678761b55"} Nov 21 13:54:58 crc kubenswrapper[4904]: I1121 13:54:58.408580 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"0e630c7d36818105b5988ab1ac5c377ed67a76a3cfd63359c4a7a4372051f724"} Nov 21 13:54:59 crc kubenswrapper[4904]: I1121 13:54:59.423146 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"7335f31cfc5ec183d4ea7588054404dc28d6d608d2fb7a9238a30a7c9fbaddb1"} Nov 21 13:55:00 crc kubenswrapper[4904]: I1121 13:55:00.452473 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"804be691-8422-4cf8-bfc1-47a1f3c02294","Type":"ContainerStarted","Data":"77c9751af55c0e6a216b37c80a0ca766091b6738d6b9ba664d4792957385ac73"} Nov 21 13:55:01 crc kubenswrapper[4904]: I1121 13:55:01.469193 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"804be691-8422-4cf8-bfc1-47a1f3c02294","Type":"ContainerStarted","Data":"77c0fd55e7579f2ce80d00e57fa442f088074e265b58e114dd47bcbfc3a2fabe"} Nov 21 13:55:01 crc kubenswrapper[4904]: I1121 13:55:01.485431 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"29f35f90e1d28c91bb0dc634b8cc7caf5384ba72f4733a6e56813e68d94e0468"} Nov 21 13:55:01 crc kubenswrapper[4904]: I1121 13:55:01.485496 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"3f09e6c1bb7f403d001bfdad6bcfeb1db7ca9b9874b9e5e8f54f4a210abf6e5f"} Nov 21 13:55:01 crc kubenswrapper[4904]: I1121 13:55:01.496882 4904 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.496866399 podStartE2EDuration="26.496866399s" podCreationTimestamp="2025-11-21 13:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:55:01.493956268 +0000 UTC m=+1375.615488840" watchObservedRunningTime="2025-11-21 13:55:01.496866399 +0000 UTC m=+1375.618398951" Nov 21 13:55:03 crc kubenswrapper[4904]: I1121 13:55:03.520605 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"b39e17d67d68c8c0052bce1f454e27f1d391c8baa37d43b924745794ad016811"} Nov 21 13:55:04 crc kubenswrapper[4904]: I1121 13:55:04.562710 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"deb0f42c916244a56799f4edf5e8489bac009159c7bdb309684a3448ec3cc71a"} Nov 21 13:55:04 crc kubenswrapper[4904]: I1121 13:55:04.563231 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"2f682a1556b8d17568e4f21737ca9eeab561fb269eb2fa01f897e2d12a6a3c2b"} Nov 21 13:55:04 crc kubenswrapper[4904]: I1121 13:55:04.563246 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"d83b25a410a1d0075c0e0924176464cacf478f701188d583215ff966d3a67afe"} Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.584421 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae","Type":"ContainerStarted","Data":"905bfe033f9e91d8e41fe48b0ed90f8212013ca27884d8f59c31de4f681f94ce"} Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.644691 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.891704256 podStartE2EDuration="1m2.64462829s" podCreationTimestamp="2025-11-21 13:54:03 +0000 UTC" firstStartedPulling="2025-11-21 13:54:38.09899577 +0000 UTC m=+1352.220528322" lastFinishedPulling="2025-11-21 13:54:59.851919804 +0000 UTC m=+1373.973452356" observedRunningTime="2025-11-21 13:55:05.632848254 +0000 UTC m=+1379.754380836" watchObservedRunningTime="2025-11-21 13:55:05.64462829 +0000 UTC m=+1379.766160842" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.959602 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9t5vv"] Nov 21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960114 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d76b81c-875e-4554-9b27-1334b70ef872" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960135 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d76b81c-875e-4554-9b27-1334b70ef872" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960156 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c9d65bf-5d02-45be-bc7b-999f9f59149d" containerName="ovn-config" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960163 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c9d65bf-5d02-45be-bc7b-999f9f59149d" containerName="ovn-config" Nov 
21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960177 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8852fc8c-175e-4cf7-854d-230dd3c32d08" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960184 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8852fc8c-175e-4cf7-854d-230dd3c32d08" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960195 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9cc95e-4ef2-44ef-a1e0-14a09425771b" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960201 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9cc95e-4ef2-44ef-a1e0-14a09425771b" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960217 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ad7f52-daab-4bea-a519-5adf0f636168" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960230 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ad7f52-daab-4bea-a519-5adf0f636168" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960244 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba76450-d33c-46a2-ab88-77f4390f174e" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960251 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba76450-d33c-46a2-ab88-77f4390f174e" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960264 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f4eaf0-e81c-4a85-a91a-7ca22ced143c" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960270 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f4eaf0-e81c-4a85-a91a-7ca22ced143c" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960282 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71b15da-cc9c-4358-a95d-36baef7254e4" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960288 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71b15da-cc9c-4358-a95d-36baef7254e4" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: E1121 13:55:05.960301 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90da1076-1698-403c-9b8b-6a93b89c47cf" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960307 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="90da1076-1698-403c-9b8b-6a93b89c47cf" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960494 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71b15da-cc9c-4358-a95d-36baef7254e4" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960510 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="8852fc8c-175e-4cf7-854d-230dd3c32d08" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960525 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba76450-d33c-46a2-ab88-77f4390f174e" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: 
I1121 13:55:05.960546 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c9cc95e-4ef2-44ef-a1e0-14a09425771b" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960556 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="90da1076-1698-403c-9b8b-6a93b89c47cf" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960573 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f4eaf0-e81c-4a85-a91a-7ca22ced143c" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960595 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ad7f52-daab-4bea-a519-5adf0f636168" containerName="mariadb-database-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960607 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d76b81c-875e-4554-9b27-1334b70ef872" containerName="mariadb-account-create" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.960622 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c9d65bf-5d02-45be-bc7b-999f9f59149d" containerName="ovn-config" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.961796 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:05 crc kubenswrapper[4904]: I1121 13:55:05.965180 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.037642 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9t5vv"] Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.102855 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.103071 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6txb\" (UniqueName: \"kubernetes.io/projected/2332c316-21a9-4575-a30c-9eb5b4e12a99-kube-api-access-d6txb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.103217 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.103467 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-config\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.103766 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.103971 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.205887 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.206783 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.207165 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.207425 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6txb\" (UniqueName: \"kubernetes.io/projected/2332c316-21a9-4575-a30c-9eb5b4e12a99-kube-api-access-d6txb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.207574 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.207774 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-config\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.208055 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.208359 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.209030 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-config\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.209436 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.210074 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-svc\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.236794 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6txb\" (UniqueName: \"kubernetes.io/projected/2332c316-21a9-4575-a30c-9eb5b4e12a99-kube-api-access-d6txb\") pod \"dnsmasq-dns-764c5664d7-9t5vv\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.286744 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.373343 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.373444 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.385834 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.610959 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 21 13:55:06 crc kubenswrapper[4904]: I1121 13:55:06.856596 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9t5vv"] Nov 21 13:55:07 crc kubenswrapper[4904]: I1121 13:55:07.615294 4904 generic.go:334] "Generic (PLEG): container finished" podID="2332c316-21a9-4575-a30c-9eb5b4e12a99" containerID="f27d2a7be2991ef0e1414c2ea9bd32e29b97a68acd93974e7dc233f835f3978b" exitCode=0 Nov 21 13:55:07 crc kubenswrapper[4904]: I1121 13:55:07.617377 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" event={"ID":"2332c316-21a9-4575-a30c-9eb5b4e12a99","Type":"ContainerDied","Data":"f27d2a7be2991ef0e1414c2ea9bd32e29b97a68acd93974e7dc233f835f3978b"} Nov 21 13:55:07 crc kubenswrapper[4904]: I1121 13:55:07.617413 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" event={"ID":"2332c316-21a9-4575-a30c-9eb5b4e12a99","Type":"ContainerStarted","Data":"2418377d7983633365d8eef60d91c74f09de5b565b2691daac2149e9a2ff8a1a"} Nov 21 13:55:08 crc kubenswrapper[4904]: I1121 13:55:08.628988 4904 generic.go:334] "Generic (PLEG): container finished" podID="7ccebc8d-97ab-4df0-be88-3e6147a45b7a" containerID="13eca21696bfd43d8c0a5eb2d1633e75c3b876688948eccbd3b1f49182d2d886" exitCode=0 Nov 21 13:55:08 crc kubenswrapper[4904]: I1121 13:55:08.629080 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v45q6" event={"ID":"7ccebc8d-97ab-4df0-be88-3e6147a45b7a","Type":"ContainerDied","Data":"13eca21696bfd43d8c0a5eb2d1633e75c3b876688948eccbd3b1f49182d2d886"} Nov 21 13:55:08 crc kubenswrapper[4904]: I1121 13:55:08.634585 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" event={"ID":"2332c316-21a9-4575-a30c-9eb5b4e12a99","Type":"ContainerStarted","Data":"90fac2b40ce68a124bf5f367b2c4d3e2bc034bb658ba521635b3f168d55928a5"} Nov 21 13:55:08 crc kubenswrapper[4904]: I1121 13:55:08.634969 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:08 crc kubenswrapper[4904]: I1121 13:55:08.672326 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" podStartSLOduration=3.672296352 podStartE2EDuration="3.672296352s" podCreationTimestamp="2025-11-21 13:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:55:08.67178654 +0000 UTC m=+1382.793319102" watchObservedRunningTime="2025-11-21 13:55:08.672296352 +0000 UTC m=+1382.793828904" Nov 21 13:55:09 crc kubenswrapper[4904]: I1121 13:55:09.653340 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-db-sync-ssmh2" event={"ID":"34287b99-9180-4bba-ae2d-bbe3eda9056f","Type":"ContainerStarted","Data":"a4a023eff86293a3fc6300e72d96113a1137ba9c84da6ba2ad22a6a150b2c703"} Nov 21 13:55:09 crc kubenswrapper[4904]: I1121 13:55:09.680054 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ssmh2" podStartSLOduration=2.764015105 podStartE2EDuration="30.680023533s" podCreationTimestamp="2025-11-21 13:54:39 +0000 UTC" firstStartedPulling="2025-11-21 13:54:41.044512506 +0000 UTC m=+1355.166045058" lastFinishedPulling="2025-11-21 13:55:08.960520944 +0000 UTC m=+1383.082053486" observedRunningTime="2025-11-21 13:55:09.67785728 +0000 UTC m=+1383.799389912" watchObservedRunningTime="2025-11-21 13:55:09.680023533 +0000 UTC m=+1383.801556095" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.172236 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v45q6" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.206772 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-db-sync-config-data\") pod \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.207122 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-combined-ca-bundle\") pod \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.207297 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrxgp\" (UniqueName: \"kubernetes.io/projected/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-kube-api-access-vrxgp\") pod \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.207472 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-config-data\") pod \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\" (UID: \"7ccebc8d-97ab-4df0-be88-3e6147a45b7a\") " Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.213979 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-kube-api-access-vrxgp" (OuterVolumeSpecName: "kube-api-access-vrxgp") pod "7ccebc8d-97ab-4df0-be88-3e6147a45b7a" (UID: "7ccebc8d-97ab-4df0-be88-3e6147a45b7a"). InnerVolumeSpecName "kube-api-access-vrxgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.222881 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7ccebc8d-97ab-4df0-be88-3e6147a45b7a" (UID: "7ccebc8d-97ab-4df0-be88-3e6147a45b7a"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.251821 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ccebc8d-97ab-4df0-be88-3e6147a45b7a" (UID: "7ccebc8d-97ab-4df0-be88-3e6147a45b7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.298906 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-config-data" (OuterVolumeSpecName: "config-data") pod "7ccebc8d-97ab-4df0-be88-3e6147a45b7a" (UID: "7ccebc8d-97ab-4df0-be88-3e6147a45b7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.315789 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.315873 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrxgp\" (UniqueName: \"kubernetes.io/projected/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-kube-api-access-vrxgp\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.315897 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.315917 4904 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7ccebc8d-97ab-4df0-be88-3e6147a45b7a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.673565 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v45q6" event={"ID":"7ccebc8d-97ab-4df0-be88-3e6147a45b7a","Type":"ContainerDied","Data":"d3275abe3191e07ac8419594380227edd605727246380ebc401de628db977dd1"} Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.674225 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3275abe3191e07ac8419594380227edd605727246380ebc401de628db977dd1" Nov 21 13:55:10 crc kubenswrapper[4904]: I1121 13:55:10.673927 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v45q6" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.135935 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9t5vv"] Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.136300 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" podUID="2332c316-21a9-4575-a30c-9eb5b4e12a99" containerName="dnsmasq-dns" containerID="cri-o://90fac2b40ce68a124bf5f367b2c4d3e2bc034bb658ba521635b3f168d55928a5" gracePeriod=10 Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.176846 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wl67h"] Nov 21 13:55:11 crc kubenswrapper[4904]: E1121 13:55:11.177507 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ccebc8d-97ab-4df0-be88-3e6147a45b7a" containerName="glance-db-sync" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.177532 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ccebc8d-97ab-4df0-be88-3e6147a45b7a" containerName="glance-db-sync" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.177784 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ccebc8d-97ab-4df0-be88-3e6147a45b7a" containerName="glance-db-sync" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.178981 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.223061 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wl67h"] Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.233614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.233739 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-config\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.233771 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.233830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.233873 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn2sz\" (UniqueName: 
\"kubernetes.io/projected/2e799bc1-49c9-425e-a256-532bb31f2d58-kube-api-access-fn2sz\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.233902 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.336285 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.336925 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn2sz\" (UniqueName: \"kubernetes.io/projected/2e799bc1-49c9-425e-a256-532bb31f2d58-kube-api-access-fn2sz\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.336962 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.337040 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.337139 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-config\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.337176 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.337557 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.338094 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.338182 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.338257 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-config\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.338463 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.363371 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn2sz\" (UniqueName: \"kubernetes.io/projected/2e799bc1-49c9-425e-a256-532bb31f2d58-kube-api-access-fn2sz\") pod \"dnsmasq-dns-74f6bcbc87-wl67h\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.611571 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.699482 4904 generic.go:334] "Generic (PLEG): container finished" podID="2332c316-21a9-4575-a30c-9eb5b4e12a99" containerID="90fac2b40ce68a124bf5f367b2c4d3e2bc034bb658ba521635b3f168d55928a5" exitCode=0 Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.700134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" event={"ID":"2332c316-21a9-4575-a30c-9eb5b4e12a99","Type":"ContainerDied","Data":"90fac2b40ce68a124bf5f367b2c4d3e2bc034bb658ba521635b3f168d55928a5"} Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.830271 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.864262 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-svc\") pod \"2332c316-21a9-4575-a30c-9eb5b4e12a99\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.864531 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-swift-storage-0\") pod \"2332c316-21a9-4575-a30c-9eb5b4e12a99\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.864572 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-nb\") pod \"2332c316-21a9-4575-a30c-9eb5b4e12a99\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.864676 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-sb\") pod \"2332c316-21a9-4575-a30c-9eb5b4e12a99\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.864706 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6txb\" (UniqueName: \"kubernetes.io/projected/2332c316-21a9-4575-a30c-9eb5b4e12a99-kube-api-access-d6txb\") pod \"2332c316-21a9-4575-a30c-9eb5b4e12a99\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.864738 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-config\") pod \"2332c316-21a9-4575-a30c-9eb5b4e12a99\" (UID: \"2332c316-21a9-4575-a30c-9eb5b4e12a99\") " Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.890622 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2332c316-21a9-4575-a30c-9eb5b4e12a99-kube-api-access-d6txb" (OuterVolumeSpecName: "kube-api-access-d6txb") pod "2332c316-21a9-4575-a30c-9eb5b4e12a99" (UID: "2332c316-21a9-4575-a30c-9eb5b4e12a99"). InnerVolumeSpecName "kube-api-access-d6txb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.934846 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-config" (OuterVolumeSpecName: "config") pod "2332c316-21a9-4575-a30c-9eb5b4e12a99" (UID: "2332c316-21a9-4575-a30c-9eb5b4e12a99"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.944298 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2332c316-21a9-4575-a30c-9eb5b4e12a99" (UID: "2332c316-21a9-4575-a30c-9eb5b4e12a99"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.953955 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2332c316-21a9-4575-a30c-9eb5b4e12a99" (UID: "2332c316-21a9-4575-a30c-9eb5b4e12a99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.967292 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6txb\" (UniqueName: \"kubernetes.io/projected/2332c316-21a9-4575-a30c-9eb5b4e12a99-kube-api-access-d6txb\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.967323 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.967363 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.967375 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.972536 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2332c316-21a9-4575-a30c-9eb5b4e12a99" (UID: "2332c316-21a9-4575-a30c-9eb5b4e12a99"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:11 crc kubenswrapper[4904]: I1121 13:55:11.976867 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2332c316-21a9-4575-a30c-9eb5b4e12a99" (UID: "2332c316-21a9-4575-a30c-9eb5b4e12a99"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.069358 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.069406 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2332c316-21a9-4575-a30c-9eb5b4e12a99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:12 crc kubenswrapper[4904]: W1121 13:55:12.168180 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e799bc1_49c9_425e_a256_532bb31f2d58.slice/crio-56617b5c784ba9507cb380f9507112bf037b4df83c635105efb1766655adb6cb WatchSource:0}: Error finding container 56617b5c784ba9507cb380f9507112bf037b4df83c635105efb1766655adb6cb: Status 404 returned error can't find the container with id 56617b5c784ba9507cb380f9507112bf037b4df83c635105efb1766655adb6cb Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.172272 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wl67h"] Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.713087 4904 generic.go:334] "Generic (PLEG): container finished" podID="2e799bc1-49c9-425e-a256-532bb31f2d58" containerID="5e02908275fb9ec5f7fcf594e3adc2b2af8cbc11993644038cca246d622a33fc" exitCode=0 Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.713202 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" event={"ID":"2e799bc1-49c9-425e-a256-532bb31f2d58","Type":"ContainerDied","Data":"5e02908275fb9ec5f7fcf594e3adc2b2af8cbc11993644038cca246d622a33fc"} Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.713644 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" event={"ID":"2e799bc1-49c9-425e-a256-532bb31f2d58","Type":"ContainerStarted","Data":"56617b5c784ba9507cb380f9507112bf037b4df83c635105efb1766655adb6cb"} Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.720352 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" event={"ID":"2332c316-21a9-4575-a30c-9eb5b4e12a99","Type":"ContainerDied","Data":"2418377d7983633365d8eef60d91c74f09de5b565b2691daac2149e9a2ff8a1a"} Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.720453 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-9t5vv" Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.720498 4904 scope.go:117] "RemoveContainer" containerID="90fac2b40ce68a124bf5f367b2c4d3e2bc034bb658ba521635b3f168d55928a5" Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.900334 4904 scope.go:117] "RemoveContainer" containerID="f27d2a7be2991ef0e1414c2ea9bd32e29b97a68acd93974e7dc233f835f3978b" Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.926887 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9t5vv"] Nov 21 13:55:12 crc kubenswrapper[4904]: I1121 13:55:12.939947 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-9t5vv"] Nov 21 13:55:13 crc kubenswrapper[4904]: I1121 13:55:13.759702 4904 generic.go:334] "Generic (PLEG): container finished" podID="34287b99-9180-4bba-ae2d-bbe3eda9056f" containerID="a4a023eff86293a3fc6300e72d96113a1137ba9c84da6ba2ad22a6a150b2c703" exitCode=0 Nov 21 13:55:13 crc kubenswrapper[4904]: I1121 13:55:13.760362 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ssmh2" event={"ID":"34287b99-9180-4bba-ae2d-bbe3eda9056f","Type":"ContainerDied","Data":"a4a023eff86293a3fc6300e72d96113a1137ba9c84da6ba2ad22a6a150b2c703"} Nov 21 13:55:13 crc kubenswrapper[4904]: I1121 13:55:13.766538 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" event={"ID":"2e799bc1-49c9-425e-a256-532bb31f2d58","Type":"ContainerStarted","Data":"af772aae1994d70c68ad6d29bf033a6fc3d480af9604a16194b78fa3eeb08797"} Nov 21 13:55:13 crc kubenswrapper[4904]: I1121 13:55:13.767314 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:13 crc kubenswrapper[4904]: I1121 13:55:13.839460 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" podStartSLOduration=2.839433637 podStartE2EDuration="2.839433637s" podCreationTimestamp="2025-11-21 13:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:55:13.829889284 +0000 UTC m=+1387.951421826" watchObservedRunningTime="2025-11-21 13:55:13.839433637 +0000 UTC m=+1387.960966209" Nov 21 13:55:14 crc kubenswrapper[4904]: I1121 13:55:14.526175 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2332c316-21a9-4575-a30c-9eb5b4e12a99" path="/var/lib/kubelet/pods/2332c316-21a9-4575-a30c-9eb5b4e12a99/volumes" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.251854 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.349313 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkh7f\" (UniqueName: \"kubernetes.io/projected/34287b99-9180-4bba-ae2d-bbe3eda9056f-kube-api-access-bkh7f\") pod \"34287b99-9180-4bba-ae2d-bbe3eda9056f\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.349896 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-combined-ca-bundle\") pod \"34287b99-9180-4bba-ae2d-bbe3eda9056f\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.349978 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-config-data\") pod \"34287b99-9180-4bba-ae2d-bbe3eda9056f\" (UID: \"34287b99-9180-4bba-ae2d-bbe3eda9056f\") " Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.358683 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34287b99-9180-4bba-ae2d-bbe3eda9056f-kube-api-access-bkh7f" (OuterVolumeSpecName: "kube-api-access-bkh7f") pod "34287b99-9180-4bba-ae2d-bbe3eda9056f" (UID: "34287b99-9180-4bba-ae2d-bbe3eda9056f"). InnerVolumeSpecName "kube-api-access-bkh7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.390572 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34287b99-9180-4bba-ae2d-bbe3eda9056f" (UID: "34287b99-9180-4bba-ae2d-bbe3eda9056f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.414096 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-config-data" (OuterVolumeSpecName: "config-data") pod "34287b99-9180-4bba-ae2d-bbe3eda9056f" (UID: "34287b99-9180-4bba-ae2d-bbe3eda9056f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.452550 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkh7f\" (UniqueName: \"kubernetes.io/projected/34287b99-9180-4bba-ae2d-bbe3eda9056f-kube-api-access-bkh7f\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.452593 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.452607 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34287b99-9180-4bba-ae2d-bbe3eda9056f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.796177 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ssmh2" event={"ID":"34287b99-9180-4bba-ae2d-bbe3eda9056f","Type":"ContainerDied","Data":"e5cdcfc10c6767c3b640ff0be7b0470d08828c00877915fced66249fcb0ffca9"} Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.796233 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5cdcfc10c6767c3b640ff0be7b0470d08828c00877915fced66249fcb0ffca9" Nov 21 13:55:15 crc kubenswrapper[4904]: I1121 13:55:15.796290 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ssmh2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.101848 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h5pb2"] Nov 21 13:55:16 crc kubenswrapper[4904]: E1121 13:55:16.102300 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2332c316-21a9-4575-a30c-9eb5b4e12a99" containerName="dnsmasq-dns" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.102321 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2332c316-21a9-4575-a30c-9eb5b4e12a99" containerName="dnsmasq-dns" Nov 21 13:55:16 crc kubenswrapper[4904]: E1121 13:55:16.102363 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34287b99-9180-4bba-ae2d-bbe3eda9056f" containerName="keystone-db-sync" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.102370 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="34287b99-9180-4bba-ae2d-bbe3eda9056f" containerName="keystone-db-sync" Nov 21 13:55:16 crc kubenswrapper[4904]: E1121 13:55:16.102381 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2332c316-21a9-4575-a30c-9eb5b4e12a99" containerName="init" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.102389 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2332c316-21a9-4575-a30c-9eb5b4e12a99" containerName="init" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.102600 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2332c316-21a9-4575-a30c-9eb5b4e12a99" containerName="dnsmasq-dns" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.102620 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="34287b99-9180-4bba-ae2d-bbe3eda9056f" containerName="keystone-db-sync" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.103340 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.106323 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.106846 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.107467 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fbg72" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.107640 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.115312 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.119099 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h5pb2"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.156144 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wl67h"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.156411 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" podUID="2e799bc1-49c9-425e-a256-532bb31f2d58" containerName="dnsmasq-dns" containerID="cri-o://af772aae1994d70c68ad6d29bf033a6fc3d480af9604a16194b78fa3eeb08797" gracePeriod=10 Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.168483 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-combined-ca-bundle\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.168567 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-config-data\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.168692 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-credential-keys\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.168719 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jlz8\" (UniqueName: \"kubernetes.io/projected/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-kube-api-access-6jlz8\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.168805 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-fernet-keys\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " 
pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.168862 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-scripts\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.215467 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-knpp8"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.217591 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.253017 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jpqkx"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.255123 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.258038 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-bnkwf" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.268251 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.270992 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-knpp8"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-combined-ca-bundle\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281311 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-config-data\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281528 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-credential-keys\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281560 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jlz8\" (UniqueName: \"kubernetes.io/projected/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-kube-api-access-6jlz8\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281607 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swrhk\" (UniqueName: \"kubernetes.io/projected/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-kube-api-access-swrhk\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 
13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281680 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281710 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281748 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-fernet-keys\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281835 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-scripts\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281856 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-config\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.281914 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.291445 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-combined-ca-bundle\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.291497 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-config-data\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.301292 
4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-scripts\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.301710 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-fernet-keys\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.314311 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jlz8\" (UniqueName: \"kubernetes.io/projected/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-kube-api-access-6jlz8\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.316909 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jpqkx"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.367199 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-credential-keys\") pod \"keystone-bootstrap-h5pb2\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.403354 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6qml\" (UniqueName: \"kubernetes.io/projected/517c919d-5531-457e-ae70-aa39ea0282a9-kube-api-access-g6qml\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.403439 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-config\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.403516 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.405437 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.405793 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swrhk\" (UniqueName: \"kubernetes.io/projected/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-kube-api-access-swrhk\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 
crc kubenswrapper[4904]: I1121 13:55:16.405853 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.405905 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.405935 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-combined-ca-bundle\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.405976 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.406065 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-config-data\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.407320 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.407893 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-svc\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.408363 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-config\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.410421 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.433421 4904 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-db-sync-qg6bd"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.433624 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.435304 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.440099 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.440356 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhklw" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.440607 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.452569 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swrhk\" (UniqueName: \"kubernetes.io/projected/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-kube-api-access-swrhk\") pod \"dnsmasq-dns-847c4cc679-knpp8\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.454255 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-6plvv"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.455818 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.463547 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.463788 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.463917 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-t9vqr" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.498603 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qg6bd"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.510970 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-combined-ca-bundle\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.511055 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-config-data\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.511096 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6qml\" (UniqueName: \"kubernetes.io/projected/517c919d-5531-457e-ae70-aa39ea0282a9-kube-api-access-g6qml\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.511131 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46d63715-407d-4908-a38f-5f6fd76729db-etc-machine-id\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.511159 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-config-data\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.511761 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6plvv"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.514618 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-db-sync-config-data\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.514678 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-combined-ca-bundle\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.514733 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-874vl\" (UniqueName: \"kubernetes.io/projected/46d63715-407d-4908-a38f-5f6fd76729db-kube-api-access-874vl\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.514773 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-scripts\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.535071 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-config-data\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.583295 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6qml\" (UniqueName: \"kubernetes.io/projected/517c919d-5531-457e-ae70-aa39ea0282a9-kube-api-access-g6qml\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.592398 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-combined-ca-bundle\") pod \"heat-db-sync-jpqkx\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.615066 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.715943 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46d63715-407d-4908-a38f-5f6fd76729db-etc-machine-id\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.716352 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-config-data\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.716493 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-db-sync-config-data\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.716563 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-combined-ca-bundle\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.716701 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-874vl\" (UniqueName: \"kubernetes.io/projected/46d63715-407d-4908-a38f-5f6fd76729db-kube-api-access-874vl\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.717371 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46d63715-407d-4908-a38f-5f6fd76729db-etc-machine-id\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.719616 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-scripts\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.736606 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jpqkx" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.752317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-db-sync-config-data\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.752801 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-scripts\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.753078 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-config-data\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.758245 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-combined-ca-bundle\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.765784 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-874vl\" (UniqueName: \"kubernetes.io/projected/46d63715-407d-4908-a38f-5f6fd76729db-kube-api-access-874vl\") pod \"cinder-db-sync-qg6bd\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.766858 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2s659"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.768643 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.771702 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.779127 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-t2ml2" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.779807 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.788747 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2s659"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.823743 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-config-data\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.823833 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-combined-ca-bundle\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.823889 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zptmh\" (UniqueName: \"kubernetes.io/projected/cf7baa95-27e6-491e-a934-d79a287ca62d-kube-api-access-zptmh\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.823950 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-combined-ca-bundle\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.824010 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-config\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.824077 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv5c9\" (UniqueName: \"kubernetes.io/projected/20b427a9-510f-45b1-82bc-cf85bb44932b-kube-api-access-jv5c9\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.824237 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-scripts\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 
13:55:16.824361 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b427a9-510f-45b1-82bc-cf85bb44932b-logs\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.831270 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lj7p5"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.837619 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.842368 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.844271 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mmtwc" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.846210 4904 generic.go:334] "Generic (PLEG): container finished" podID="2e799bc1-49c9-425e-a256-532bb31f2d58" containerID="af772aae1994d70c68ad6d29bf033a6fc3d480af9604a16194b78fa3eeb08797" exitCode=0 Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.846247 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" event={"ID":"2e799bc1-49c9-425e-a256-532bb31f2d58","Type":"ContainerDied","Data":"af772aae1994d70c68ad6d29bf033a6fc3d480af9604a16194b78fa3eeb08797"} Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.856394 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lj7p5"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.898217 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-knpp8"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.920837 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vmk6g"] Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.924150 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951429 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scdpq\" (UniqueName: \"kubernetes.io/projected/6201ad46-9eaf-4b17-b40f-e31756dea737-kube-api-access-scdpq\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951525 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951559 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-db-sync-config-data\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951612 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-combined-ca-bundle\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951744 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zptmh\" (UniqueName: \"kubernetes.io/projected/cf7baa95-27e6-491e-a934-d79a287ca62d-kube-api-access-zptmh\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951776 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-combined-ca-bundle\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951836 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951871 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-config\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.951967 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv5c9\" (UniqueName: \"kubernetes.io/projected/20b427a9-510f-45b1-82bc-cf85bb44932b-kube-api-access-jv5c9\") pod \"placement-db-sync-2s659\" (UID: 
\"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.952004 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.952058 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcz2k\" (UniqueName: \"kubernetes.io/projected/73cc8d35-c5fc-4ed7-a296-e738982614b5-kube-api-access-kcz2k\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.952089 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-scripts\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.952197 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-config\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.952253 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b427a9-510f-45b1-82bc-cf85bb44932b-logs\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.952306 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-combined-ca-bundle\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.952403 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-config-data\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.952447 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.958113 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b427a9-510f-45b1-82bc-cf85bb44932b-logs\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659" Nov 21 
Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.958399 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-combined-ca-bundle\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659"
Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.963183 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-config\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv"
Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.964538 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-combined-ca-bundle\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv"
Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.970106 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-scripts\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659"
Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.972152 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-config-data\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659"
Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.985302 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv5c9\" (UniqueName: \"kubernetes.io/projected/20b427a9-510f-45b1-82bc-cf85bb44932b-kube-api-access-jv5c9\") pod \"placement-db-sync-2s659\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " pod="openstack/placement-db-sync-2s659"
Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.986298 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zptmh\" (UniqueName: \"kubernetes.io/projected/cf7baa95-27e6-491e-a934-d79a287ca62d-kube-api-access-zptmh\") pod \"neutron-db-sync-6plvv\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " pod="openstack/neutron-db-sync-6plvv"
Nov 21 13:55:16 crc kubenswrapper[4904]: I1121 13:55:16.992419 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vmk6g"]
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.053793 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.056947 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-combined-ca-bundle\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057105 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057212 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scdpq\" (UniqueName: \"kubernetes.io/projected/6201ad46-9eaf-4b17-b40f-e31756dea737-kube-api-access-scdpq\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057304 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057327 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-db-sync-config-data\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057490 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057612 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057701 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcz2k\" (UniqueName: \"kubernetes.io/projected/73cc8d35-c5fc-4ed7-a296-e738982614b5-kube-api-access-kcz2k\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057756 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-config\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.057948 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qg6bd"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.058922 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-config\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.059717 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.060602 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.061392 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.062683 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.064925 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.066566 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.067883 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.069115 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-combined-ca-bundle\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.069813 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-db-sync-config-data\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.095601 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scdpq\" (UniqueName: \"kubernetes.io/projected/6201ad46-9eaf-4b17-b40f-e31756dea737-kube-api-access-scdpq\") pod \"barbican-db-sync-lj7p5\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.097974 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcz2k\" (UniqueName: \"kubernetes.io/projected/73cc8d35-c5fc-4ed7-a296-e738982614b5-kube-api-access-kcz2k\") pod \"dnsmasq-dns-785d8bcb8c-vmk6g\" (UID: 
\"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.112647 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.124638 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2s659" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.149623 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.159548 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn2sz\" (UniqueName: \"kubernetes.io/projected/2e799bc1-49c9-425e-a256-532bb31f2d58-kube-api-access-fn2sz\") pod \"2e799bc1-49c9-425e-a256-532bb31f2d58\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.159615 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-config\") pod \"2e799bc1-49c9-425e-a256-532bb31f2d58\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.159666 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-svc\") pod \"2e799bc1-49c9-425e-a256-532bb31f2d58\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.159843 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-config-data\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.159877 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4mvj\" (UniqueName: \"kubernetes.io/projected/a896cd91-f67c-48a4-8791-53dc03090b28-kube-api-access-g4mvj\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.159911 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-scripts\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.159962 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-run-httpd\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.160003 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc 
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.160042 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-log-httpd\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.160069 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.179155 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e799bc1-49c9-425e-a256-532bb31f2d58-kube-api-access-fn2sz" (OuterVolumeSpecName: "kube-api-access-fn2sz") pod "2e799bc1-49c9-425e-a256-532bb31f2d58" (UID: "2e799bc1-49c9-425e-a256-532bb31f2d58"). InnerVolumeSpecName "kube-api-access-fn2sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.202754 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lj7p5"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.245453 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-config" (OuterVolumeSpecName: "config") pod "2e799bc1-49c9-425e-a256-532bb31f2d58" (UID: "2e799bc1-49c9-425e-a256-532bb31f2d58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.279126 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-nb\") pod \"2e799bc1-49c9-425e-a256-532bb31f2d58\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") "
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.279208 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-swift-storage-0\") pod \"2e799bc1-49c9-425e-a256-532bb31f2d58\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") "
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.279687 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2e799bc1-49c9-425e-a256-532bb31f2d58" (UID: "2e799bc1-49c9-425e-a256-532bb31f2d58"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.279876 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-sb\") pod \"2e799bc1-49c9-425e-a256-532bb31f2d58\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") "
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.281560 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.281966 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-run-httpd\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282143 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282248 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-log-httpd\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282305 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282511 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-config-data\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282562 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4mvj\" (UniqueName: \"kubernetes.io/projected/a896cd91-f67c-48a4-8791-53dc03090b28-kube-api-access-g4mvj\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282678 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-scripts\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0"
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282767 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn2sz\" (UniqueName: \"kubernetes.io/projected/2e799bc1-49c9-425e-a256-532bb31f2d58-kube-api-access-fn2sz\") on node \"crc\" DevicePath \"\""
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282782 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282791 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-svc\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-run-httpd\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.282896 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-log-httpd\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.300156 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-config-data\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.309933 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.310700 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-scripts\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.312588 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.334171 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4mvj\" (UniqueName: \"kubernetes.io/projected/a896cd91-f67c-48a4-8791-53dc03090b28-kube-api-access-g4mvj\") pod \"ceilometer-0\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.386210 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2e799bc1-49c9-425e-a256-532bb31f2d58" (UID: "2e799bc1-49c9-425e-a256-532bb31f2d58"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.387143 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-swift-storage-0\") pod \"2e799bc1-49c9-425e-a256-532bb31f2d58\" (UID: \"2e799bc1-49c9-425e-a256-532bb31f2d58\") " Nov 21 13:55:17 crc kubenswrapper[4904]: W1121 13:55:17.388088 4904 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2e799bc1-49c9-425e-a256-532bb31f2d58/volumes/kubernetes.io~configmap/dns-swift-storage-0 Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.388112 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2e799bc1-49c9-425e-a256-532bb31f2d58" (UID: "2e799bc1-49c9-425e-a256-532bb31f2d58"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.405052 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2e799bc1-49c9-425e-a256-532bb31f2d58" (UID: "2e799bc1-49c9-425e-a256-532bb31f2d58"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.414492 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h5pb2"] Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.427433 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.433071 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2e799bc1-49c9-425e-a256-532bb31f2d58" (UID: "2e799bc1-49c9-425e-a256-532bb31f2d58"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.491157 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.493138 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.493155 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2e799bc1-49c9-425e-a256-532bb31f2d58-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.855857 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jpqkx"] Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.864943 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h5pb2" event={"ID":"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c","Type":"ContainerStarted","Data":"33b80b32c15553705770185a11039ca099743a6432758a78e91caac7ae8b8c5e"} Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.864995 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h5pb2" event={"ID":"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c","Type":"ContainerStarted","Data":"e4bcf30cc6e56d77b31c5b2bcd85d4d32eca7e4d851c1e2b0d10191d77174dac"} Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.867599 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" event={"ID":"2e799bc1-49c9-425e-a256-532bb31f2d58","Type":"ContainerDied","Data":"56617b5c784ba9507cb380f9507112bf037b4df83c635105efb1766655adb6cb"} Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.867640 4904 scope.go:117] "RemoveContainer" containerID="af772aae1994d70c68ad6d29bf033a6fc3d480af9604a16194b78fa3eeb08797" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.867794 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-wl67h" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.868328 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-knpp8"] Nov 21 13:55:17 crc kubenswrapper[4904]: W1121 13:55:17.874429 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ccb687_9ef0_4a6b_8f9b_97d7c7e69a5d.slice/crio-d2af73d091fd1d676106c4c8b263d8042ec5e11b1e537d067d61733fc3b13131 WatchSource:0}: Error finding container d2af73d091fd1d676106c4c8b263d8042ec5e11b1e537d067d61733fc3b13131: Status 404 returned error can't find the container with id d2af73d091fd1d676106c4c8b263d8042ec5e11b1e537d067d61733fc3b13131 Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.893914 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h5pb2" podStartSLOduration=1.893894824 podStartE2EDuration="1.893894824s" podCreationTimestamp="2025-11-21 13:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:55:17.892169511 +0000 UTC m=+1392.013702073" watchObservedRunningTime="2025-11-21 13:55:17.893894824 +0000 UTC m=+1392.015427376" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.925044 4904 scope.go:117] "RemoveContainer" containerID="5e02908275fb9ec5f7fcf594e3adc2b2af8cbc11993644038cca246d622a33fc" Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.949506 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wl67h"] Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.967933 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-wl67h"] Nov 21 13:55:17 crc kubenswrapper[4904]: I1121 13:55:17.992121 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qg6bd"] Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.036768 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-6plvv"] Nov 21 13:55:18 crc kubenswrapper[4904]: W1121 13:55:18.091087 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf7baa95_27e6_491e_a934_d79a287ca62d.slice/crio-5cc5cc31d4f8ea6bcc181807800bb33a3b15731f3b0a3f21fba7ab0f374f1df9 WatchSource:0}: Error finding container 5cc5cc31d4f8ea6bcc181807800bb33a3b15731f3b0a3f21fba7ab0f374f1df9: Status 404 returned error can't find the container with id 5cc5cc31d4f8ea6bcc181807800bb33a3b15731f3b0a3f21fba7ab0f374f1df9 Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.173284 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lj7p5"] Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.269906 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2s659"] Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.315939 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vmk6g"] Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.366517 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.556709 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e799bc1-49c9-425e-a256-532bb31f2d58" 
path="/var/lib/kubelet/pods/2e799bc1-49c9-425e-a256-532bb31f2d58/volumes" Nov 21 13:55:18 crc kubenswrapper[4904]: E1121 13:55:18.639219 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52ccb687_9ef0_4a6b_8f9b_97d7c7e69a5d.slice/crio-conmon-1a4f68d2f86ca731267c4ca966cc9d17428d0400d3630f0ca139ec90a5771eab.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.900882 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2s659" event={"ID":"20b427a9-510f-45b1-82bc-cf85bb44932b","Type":"ContainerStarted","Data":"1de828bfb005e23979262d50bbc1c4b63926725bbd0a0dc2a3bbfdda6da0bd78"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.914834 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jpqkx" event={"ID":"517c919d-5531-457e-ae70-aa39ea0282a9","Type":"ContainerStarted","Data":"91527cd6d4b9d33cbe0ed429468e41538d2457d5df4d903a419bcc454a55dfd0"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.925147 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a896cd91-f67c-48a4-8791-53dc03090b28","Type":"ContainerStarted","Data":"f1cdde6c8ce5f677d3a23e3f2d6f55ad34ac3d1d8ab4a61c392fefaf21512612"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.927155 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6plvv" event={"ID":"cf7baa95-27e6-491e-a934-d79a287ca62d","Type":"ContainerStarted","Data":"417542c8dded4121e13e726802629777aa7fb90706ee8f4936852ab2a729dea7"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.927183 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6plvv" event={"ID":"cf7baa95-27e6-491e-a934-d79a287ca62d","Type":"ContainerStarted","Data":"5cc5cc31d4f8ea6bcc181807800bb33a3b15731f3b0a3f21fba7ab0f374f1df9"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.929401 4904 generic.go:334] "Generic (PLEG): container finished" podID="52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" containerID="1a4f68d2f86ca731267c4ca966cc9d17428d0400d3630f0ca139ec90a5771eab" exitCode=0 Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.929467 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-knpp8" event={"ID":"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d","Type":"ContainerDied","Data":"1a4f68d2f86ca731267c4ca966cc9d17428d0400d3630f0ca139ec90a5771eab"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.929484 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-knpp8" event={"ID":"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d","Type":"ContainerStarted","Data":"d2af73d091fd1d676106c4c8b263d8042ec5e11b1e537d067d61733fc3b13131"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.931400 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lj7p5" event={"ID":"6201ad46-9eaf-4b17-b40f-e31756dea737","Type":"ContainerStarted","Data":"65b03ff57782290ab2ff1d22ba1c468839595f066d82a5f38afadba27f1b9043"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.938241 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qg6bd" event={"ID":"46d63715-407d-4908-a38f-5f6fd76729db","Type":"ContainerStarted","Data":"1b43428dfefc73094af84fa81b30675a017729d68477191dec66c71f491aed6c"} Nov 21 13:55:18 crc 
kubenswrapper[4904]: I1121 13:55:18.941198 4904 generic.go:334] "Generic (PLEG): container finished" podID="73cc8d35-c5fc-4ed7-a296-e738982614b5" containerID="f256bc4a619a3ff5175a5e1501ce758ce4bccb4d61095d4ded51617bef5337d5" exitCode=0 Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.941310 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" event={"ID":"73cc8d35-c5fc-4ed7-a296-e738982614b5","Type":"ContainerDied","Data":"f256bc4a619a3ff5175a5e1501ce758ce4bccb4d61095d4ded51617bef5337d5"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.941348 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" event={"ID":"73cc8d35-c5fc-4ed7-a296-e738982614b5","Type":"ContainerStarted","Data":"366d5f2e765bbe3a762db35376dbe05a8d1f4775ac2359a5aa0c71d4401c534d"} Nov 21 13:55:18 crc kubenswrapper[4904]: I1121 13:55:18.975490 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-6plvv" podStartSLOduration=2.975462233 podStartE2EDuration="2.975462233s" podCreationTimestamp="2025-11-21 13:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:55:18.960900739 +0000 UTC m=+1393.082433291" watchObservedRunningTime="2025-11-21 13:55:18.975462233 +0000 UTC m=+1393.096994785" Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.593719 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.782712 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-knpp8" Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.884526 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-sb\") pod \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.885012 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-nb\") pod \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.885223 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swrhk\" (UniqueName: \"kubernetes.io/projected/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-kube-api-access-swrhk\") pod \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.885266 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-swift-storage-0\") pod \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.885288 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-config\") pod \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") " Nov 21 
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.885339 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-svc\") pod \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\" (UID: \"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d\") "
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.900127 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-kube-api-access-swrhk" (OuterVolumeSpecName: "kube-api-access-swrhk") pod "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" (UID: "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d"). InnerVolumeSpecName "kube-api-access-swrhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.932314 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" (UID: "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.934822 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" (UID: "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.943998 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" (UID: "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.968780 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" (UID: "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.972706 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" event={"ID":"73cc8d35-c5fc-4ed7-a296-e738982614b5","Type":"ContainerStarted","Data":"c40f0c82366703d54d93db604dbaea6f236485ef33c5c853351c10651a5dadcd"}
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.973007 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g"
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.980461 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-config" (OuterVolumeSpecName: "config") pod "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" (UID: "52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.981182 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-knpp8" event={"ID":"52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d","Type":"ContainerDied","Data":"d2af73d091fd1d676106c4c8b263d8042ec5e11b1e537d067d61733fc3b13131"}
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.981260 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-knpp8"
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.981308 4904 scope.go:117] "RemoveContainer" containerID="1a4f68d2f86ca731267c4ca966cc9d17428d0400d3630f0ca139ec90a5771eab"
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.988276 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.990994 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-config\") on node \"crc\" DevicePath \"\""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.991035 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.991440 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.991452 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 21 13:55:19 crc kubenswrapper[4904]: I1121 13:55:19.991466 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swrhk\" (UniqueName: \"kubernetes.io/projected/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d-kube-api-access-swrhk\") on node \"crc\" DevicePath \"\""
Nov 21 13:55:20 crc kubenswrapper[4904]: I1121 13:55:20.003023 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" podStartSLOduration=4.002977596 podStartE2EDuration="4.002977596s" podCreationTimestamp="2025-11-21 13:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:55:19.998829915 +0000 UTC m=+1394.120362487" watchObservedRunningTime="2025-11-21 13:55:20.002977596 +0000 UTC m=+1394.124510148"
Nov 21 13:55:20 crc kubenswrapper[4904]: I1121 13:55:20.107775 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-knpp8"]
Nov 21 13:55:20 crc kubenswrapper[4904]: I1121 13:55:20.118903 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-knpp8"]
Nov 21 13:55:20 crc kubenswrapper[4904]: I1121 13:55:20.537982 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" path="/var/lib/kubelet/pods/52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d/volumes"
generic.go:334] "Generic (PLEG): container finished" podID="e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" containerID="33b80b32c15553705770185a11039ca099743a6432758a78e91caac7ae8b8c5e" exitCode=0 Nov 21 13:55:23 crc kubenswrapper[4904]: I1121 13:55:23.065354 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h5pb2" event={"ID":"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c","Type":"ContainerDied","Data":"33b80b32c15553705770185a11039ca099743a6432758a78e91caac7ae8b8c5e"} Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.283812 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.326097 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h5pb2" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.394351 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-combined-ca-bundle\") pod \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.394501 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-fernet-keys\") pod \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.394554 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-config-data\") pod \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.394647 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-scripts\") pod \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.394737 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-credential-keys\") pod \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.394759 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jlz8\" (UniqueName: \"kubernetes.io/projected/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-kube-api-access-6jlz8\") pod \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\" (UID: \"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c\") " Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.445924 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-scripts" (OuterVolumeSpecName: "scripts") pod "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" (UID: "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.446948 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-kube-api-access-6jlz8" (OuterVolumeSpecName: "kube-api-access-6jlz8") pod "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" (UID: "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c"). InnerVolumeSpecName "kube-api-access-6jlz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.451532 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" (UID: "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.451737 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8qd6n"] Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.451923 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" (UID: "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.452121 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-8qd6n" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="dnsmasq-dns" containerID="cri-o://e8157fcf7931f7d2bac074b8e00b0fac273552bf72ad80e87dd4b69b9f75d446" gracePeriod=10 Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.456723 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" (UID: "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.458471 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-config-data" (OuterVolumeSpecName: "config-data") pod "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" (UID: "e3e2c901-48dd-481f-b206-e4cfd5ec0e3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.498522 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jlz8\" (UniqueName: \"kubernetes.io/projected/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-kube-api-access-6jlz8\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.499096 4904 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.499239 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.499336 4904 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.499460 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:27 crc kubenswrapper[4904]: I1121 13:55:27.499554 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.167436 4904 generic.go:334] "Generic (PLEG): container finished" podID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerID="e8157fcf7931f7d2bac074b8e00b0fac273552bf72ad80e87dd4b69b9f75d446" exitCode=0 Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.167527 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8qd6n" event={"ID":"6e1635ad-9fa5-4e0c-8efd-c6127723feba","Type":"ContainerDied","Data":"e8157fcf7931f7d2bac074b8e00b0fac273552bf72ad80e87dd4b69b9f75d446"} Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.172488 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h5pb2" event={"ID":"e3e2c901-48dd-481f-b206-e4cfd5ec0e3c","Type":"ContainerDied","Data":"e4bcf30cc6e56d77b31c5b2bcd85d4d32eca7e4d851c1e2b0d10191d77174dac"} Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.172521 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bcf30cc6e56d77b31c5b2bcd85d4d32eca7e4d851c1e2b0d10191d77174dac" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.172559 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h5pb2"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.454844 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h5pb2"]
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.464140 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h5pb2"]
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.537514 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" path="/var/lib/kubelet/pods/e3e2c901-48dd-481f-b206-e4cfd5ec0e3c/volumes"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.636830 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4hqqk"]
Nov 21 13:55:28 crc kubenswrapper[4904]: E1121 13:55:28.637246 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e799bc1-49c9-425e-a256-532bb31f2d58" containerName="init"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.637269 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e799bc1-49c9-425e-a256-532bb31f2d58" containerName="init"
Nov 21 13:55:28 crc kubenswrapper[4904]: E1121 13:55:28.637279 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" containerName="keystone-bootstrap"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.637287 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" containerName="keystone-bootstrap"
Nov 21 13:55:28 crc kubenswrapper[4904]: E1121 13:55:28.637301 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e799bc1-49c9-425e-a256-532bb31f2d58" containerName="dnsmasq-dns"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.637309 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e799bc1-49c9-425e-a256-532bb31f2d58" containerName="dnsmasq-dns"
Nov 21 13:55:28 crc kubenswrapper[4904]: E1121 13:55:28.637373 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" containerName="init"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.637380 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" containerName="init"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.640005 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e2c901-48dd-481f-b206-e4cfd5ec0e3c" containerName="keystone-bootstrap"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.640081 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e799bc1-49c9-425e-a256-532bb31f2d58" containerName="dnsmasq-dns"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.640094 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ccb687-9ef0-4a6b-8f9b-97d7c7e69a5d" containerName="init"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.640841 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4hqqk"]
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.640972 4904 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.644730 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.644741 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.645259 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.645370 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fbg72"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.647436 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.742852 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-scripts\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.742941 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-config-data\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.743035 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-fernet-keys\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.743065 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrm6l\" (UniqueName: \"kubernetes.io/projected/a9067c4e-8264-43c2-b86a-7950915f1a31-kube-api-access-nrm6l\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.743140 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-combined-ca-bundle\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.743160 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-credential-keys\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.845178 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-scripts\") pod \"keystone-bootstrap-4hqqk\" (UID:
\"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.845246 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-config-data\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.845311 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-fernet-keys\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.845338 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrm6l\" (UniqueName: \"kubernetes.io/projected/a9067c4e-8264-43c2-b86a-7950915f1a31-kube-api-access-nrm6l\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.845399 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-combined-ca-bundle\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.845416 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-credential-keys\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.850631 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-credential-keys\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.852350 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-scripts\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.860889 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-config-data\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.863706 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-combined-ca-bundle\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.865506 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-fernet-keys\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.870510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrm6l\" (UniqueName: \"kubernetes.io/projected/a9067c4e-8264-43c2-b86a-7950915f1a31-kube-api-access-nrm6l\") pod \"keystone-bootstrap-4hqqk\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:28 crc kubenswrapper[4904]: I1121 13:55:28.973255 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4hqqk"
Nov 21 13:55:29 crc kubenswrapper[4904]: I1121 13:55:29.828256 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-8qd6n" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused"
Nov 21 13:55:32 crc kubenswrapper[4904]: E1121 13:55:32.431144 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Nov 21 13:55:32 crc kubenswrapper[4904]: E1121 13:55:32.432267 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jv5c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationM
essagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-2s659_openstack(20b427a9-510f-45b1-82bc-cf85bb44932b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 21 13:55:32 crc kubenswrapper[4904]: E1121 13:55:32.433502 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2s659" podUID="20b427a9-510f-45b1-82bc-cf85bb44932b"
Nov 21 13:55:33 crc kubenswrapper[4904]: E1121 13:55:33.247359 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-2s659" podUID="20b427a9-510f-45b1-82bc-cf85bb44932b"
Nov 21 13:55:34 crc kubenswrapper[4904]: I1121 13:55:34.828285 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-8qd6n" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused"
Nov 21 13:55:37 crc kubenswrapper[4904]: E1121 13:55:37.088823 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Nov 21 13:55:37 crc kubenswrapper[4904]: E1121 13:55:37.089060 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scdpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lj7p5_openstack(6201ad46-9eaf-4b17-b40f-e31756dea737): ErrImagePull: rpc error:
code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 21 13:55:37 crc kubenswrapper[4904]: E1121 13:55:37.090317 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lj7p5" podUID="6201ad46-9eaf-4b17-b40f-e31756dea737"
Nov 21 13:55:37 crc kubenswrapper[4904]: E1121 13:55:37.288237 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-lj7p5" podUID="6201ad46-9eaf-4b17-b40f-e31756dea737"
Nov 21 13:55:37 crc kubenswrapper[4904]: E1121 13:55:37.419889 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified"
Nov 21 13:55:37 crc kubenswrapper[4904]: E1121 13:55:37.420133 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6qml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jpqkx_openstack(517c919d-5531-457e-ae70-aa39ea0282a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 21 13:55:37 crc kubenswrapper[4904]: E1121 13:55:37.421242 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to
\"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-jpqkx" podUID="517c919d-5531-457e-ae70-aa39ea0282a9" Nov 21 13:55:38 crc kubenswrapper[4904]: E1121 13:55:38.305153 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-jpqkx" podUID="517c919d-5531-457e-ae70-aa39ea0282a9" Nov 21 13:55:39 crc kubenswrapper[4904]: I1121 13:55:39.827877 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-8qd6n" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Nov 21 13:55:39 crc kubenswrapper[4904]: I1121 13:55:39.828588 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.303912 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dn4kl"] Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.307031 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.311642 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dn4kl"] Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.425801 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxh4\" (UniqueName: \"kubernetes.io/projected/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-kube-api-access-rnxh4\") pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.425986 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-catalog-content\") pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.426017 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-utilities\") pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.529116 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxh4\" (UniqueName: \"kubernetes.io/projected/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-kube-api-access-rnxh4\") pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.529266 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-catalog-content\") 
pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.529305 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-utilities\") pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.533859 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-catalog-content\") pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.534010 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-utilities\") pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.574748 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnxh4\" (UniqueName: \"kubernetes.io/projected/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-kube-api-access-rnxh4\") pod \"community-operators-dn4kl\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.642405 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:55:44 crc kubenswrapper[4904]: I1121 13:55:44.828070 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-8qd6n" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Nov 21 13:55:46 crc kubenswrapper[4904]: E1121 13:55:46.488980 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 21 13:55:46 crc kubenswrapper[4904]: E1121 13:55:46.490262 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-874vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qg6bd_openstack(46d63715-407d-4908-a38f-5f6fd76729db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:55:46 crc kubenswrapper[4904]: E1121 13:55:46.491498 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qg6bd" podUID="46d63715-407d-4908-a38f-5f6fd76729db" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.643430 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.782716 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-sb\") pod \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.782769 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-config\") pod \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.782806 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-dns-svc\") pod \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.782905 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jd2n\" (UniqueName: \"kubernetes.io/projected/6e1635ad-9fa5-4e0c-8efd-c6127723feba-kube-api-access-8jd2n\") pod \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.782959 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-nb\") pod \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\" (UID: \"6e1635ad-9fa5-4e0c-8efd-c6127723feba\") " Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.790958 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1635ad-9fa5-4e0c-8efd-c6127723feba-kube-api-access-8jd2n" (OuterVolumeSpecName: "kube-api-access-8jd2n") pod "6e1635ad-9fa5-4e0c-8efd-c6127723feba" (UID: "6e1635ad-9fa5-4e0c-8efd-c6127723feba"). InnerVolumeSpecName "kube-api-access-8jd2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.835877 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6e1635ad-9fa5-4e0c-8efd-c6127723feba" (UID: "6e1635ad-9fa5-4e0c-8efd-c6127723feba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.845221 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-config" (OuterVolumeSpecName: "config") pod "6e1635ad-9fa5-4e0c-8efd-c6127723feba" (UID: "6e1635ad-9fa5-4e0c-8efd-c6127723feba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.845242 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e1635ad-9fa5-4e0c-8efd-c6127723feba" (UID: "6e1635ad-9fa5-4e0c-8efd-c6127723feba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.861523 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6e1635ad-9fa5-4e0c-8efd-c6127723feba" (UID: "6e1635ad-9fa5-4e0c-8efd-c6127723feba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.886746 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.886785 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.886794 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.886805 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jd2n\" (UniqueName: \"kubernetes.io/projected/6e1635ad-9fa5-4e0c-8efd-c6127723feba-kube-api-access-8jd2n\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.886816 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e1635ad-9fa5-4e0c-8efd-c6127723feba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:46 crc kubenswrapper[4904]: I1121 13:55:46.998091 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4hqqk"] Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.006822 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dn4kl"] Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.406399 4904 generic.go:334] "Generic (PLEG): container finished" podID="cf7baa95-27e6-491e-a934-d79a287ca62d" containerID="417542c8dded4121e13e726802629777aa7fb90706ee8f4936852ab2a729dea7" exitCode=0 Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.406486 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6plvv" event={"ID":"cf7baa95-27e6-491e-a934-d79a287ca62d","Type":"ContainerDied","Data":"417542c8dded4121e13e726802629777aa7fb90706ee8f4936852ab2a729dea7"} Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.411883 4904 generic.go:334] "Generic (PLEG): container finished" podID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerID="eaf6448e9789997bdd02812683f4225252f21df94020bd88e72671f5cbdd2453" exitCode=0 Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.412122 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dn4kl" event={"ID":"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a","Type":"ContainerDied","Data":"eaf6448e9789997bdd02812683f4225252f21df94020bd88e72671f5cbdd2453"} Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.413092 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn4kl" event={"ID":"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a","Type":"ContainerStarted","Data":"be4cf993fe47d829464f1a3e13f5906a46c7bbd1f63497605981dc36675d7a75"} Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.417324 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4hqqk" event={"ID":"a9067c4e-8264-43c2-b86a-7950915f1a31","Type":"ContainerStarted","Data":"1c243466d3d8eb830ee5b9a97e928c7aa949fcb9dc08023366b3f52f50a398f3"} Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.417378 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4hqqk" event={"ID":"a9067c4e-8264-43c2-b86a-7950915f1a31","Type":"ContainerStarted","Data":"dbc6ffe5c579b4be45fcd68f7a0447e7d2649a1d28e983643231c14d23827200"} Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.433068 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-8qd6n" event={"ID":"6e1635ad-9fa5-4e0c-8efd-c6127723feba","Type":"ContainerDied","Data":"4261c93f5804b6cdb1311e81321d80ab30d2b0191dcc1429fa0a0d4e41a085c4"} Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.433565 4904 scope.go:117] "RemoveContainer" containerID="e8157fcf7931f7d2bac074b8e00b0fac273552bf72ad80e87dd4b69b9f75d446" Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.433109 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-8qd6n" Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.444876 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a896cd91-f67c-48a4-8791-53dc03090b28","Type":"ContainerStarted","Data":"feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c"} Nov 21 13:55:47 crc kubenswrapper[4904]: E1121 13:55:47.448586 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qg6bd" podUID="46d63715-407d-4908-a38f-5f6fd76729db" Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.485197 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4hqqk" podStartSLOduration=19.485178885 podStartE2EDuration="19.485178885s" podCreationTimestamp="2025-11-21 13:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:55:47.479532898 +0000 UTC m=+1421.601065450" watchObservedRunningTime="2025-11-21 13:55:47.485178885 +0000 UTC m=+1421.606711437" Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.539233 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8qd6n"] Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.547549 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-8qd6n"] Nov 21 13:55:47 crc kubenswrapper[4904]: I1121 13:55:47.745282 4904 scope.go:117] "RemoveContainer" 
containerID="f8ff95a905b3f2f3400f3c975d56b2199481c904baf836e7f16c5ba5503518a6" Nov 21 13:55:48 crc kubenswrapper[4904]: I1121 13:55:48.462025 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a896cd91-f67c-48a4-8791-53dc03090b28","Type":"ContainerStarted","Data":"a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada"} Nov 21 13:55:48 crc kubenswrapper[4904]: I1121 13:55:48.535074 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" path="/var/lib/kubelet/pods/6e1635ad-9fa5-4e0c-8efd-c6127723feba/volumes" Nov 21 13:55:48 crc kubenswrapper[4904]: I1121 13:55:48.937591 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.043629 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-combined-ca-bundle\") pod \"cf7baa95-27e6-491e-a934-d79a287ca62d\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.043803 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-config\") pod \"cf7baa95-27e6-491e-a934-d79a287ca62d\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.043961 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zptmh\" (UniqueName: \"kubernetes.io/projected/cf7baa95-27e6-491e-a934-d79a287ca62d-kube-api-access-zptmh\") pod \"cf7baa95-27e6-491e-a934-d79a287ca62d\" (UID: \"cf7baa95-27e6-491e-a934-d79a287ca62d\") " Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.053049 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7baa95-27e6-491e-a934-d79a287ca62d-kube-api-access-zptmh" (OuterVolumeSpecName: "kube-api-access-zptmh") pod "cf7baa95-27e6-491e-a934-d79a287ca62d" (UID: "cf7baa95-27e6-491e-a934-d79a287ca62d"). InnerVolumeSpecName "kube-api-access-zptmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.084960 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-config" (OuterVolumeSpecName: "config") pod "cf7baa95-27e6-491e-a934-d79a287ca62d" (UID: "cf7baa95-27e6-491e-a934-d79a287ca62d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.085099 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf7baa95-27e6-491e-a934-d79a287ca62d" (UID: "cf7baa95-27e6-491e-a934-d79a287ca62d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.147164 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.147207 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf7baa95-27e6-491e-a934-d79a287ca62d-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.147221 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zptmh\" (UniqueName: \"kubernetes.io/projected/cf7baa95-27e6-491e-a934-d79a287ca62d-kube-api-access-zptmh\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.474816 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-6plvv" event={"ID":"cf7baa95-27e6-491e-a934-d79a287ca62d","Type":"ContainerDied","Data":"5cc5cc31d4f8ea6bcc181807800bb33a3b15731f3b0a3f21fba7ab0f374f1df9"} Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.474873 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc5cc31d4f8ea6bcc181807800bb33a3b15731f3b0a3f21fba7ab0f374f1df9" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.474851 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-6plvv" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.479167 4904 generic.go:334] "Generic (PLEG): container finished" podID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerID="e56270ac1d05f84eed08e305496d561732d93a124ea0a6493c88ed1e37160315" exitCode=0 Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.479216 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn4kl" event={"ID":"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a","Type":"ContainerDied","Data":"e56270ac1d05f84eed08e305496d561732d93a124ea0a6493c88ed1e37160315"} Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.878084 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-rjx2d"] Nov 21 13:55:49 crc kubenswrapper[4904]: E1121 13:55:49.879094 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7baa95-27e6-491e-a934-d79a287ca62d" containerName="neutron-db-sync" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.879110 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7baa95-27e6-491e-a934-d79a287ca62d" containerName="neutron-db-sync" Nov 21 13:55:49 crc kubenswrapper[4904]: E1121 13:55:49.879122 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="dnsmasq-dns" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.879128 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="dnsmasq-dns" Nov 21 13:55:49 crc kubenswrapper[4904]: E1121 13:55:49.879142 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="init" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.879149 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="init" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.879337 4904 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cf7baa95-27e6-491e-a934-d79a287ca62d" containerName="neutron-db-sync" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.879359 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1635ad-9fa5-4e0c-8efd-c6127723feba" containerName="dnsmasq-dns" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.880538 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.904624 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-rjx2d"] Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.969294 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.969354 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6kdg\" (UniqueName: \"kubernetes.io/projected/0acfb4a5-5122-4f88-885f-7e255f82c2a1-kube-api-access-n6kdg\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.969442 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.969492 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.969517 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-config\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:49 crc kubenswrapper[4904]: I1121 13:55:49.969536 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.076792 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.077547 
4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.077624 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-config\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.077655 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.077894 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.077980 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.078028 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6kdg\" (UniqueName: \"kubernetes.io/projected/0acfb4a5-5122-4f88-885f-7e255f82c2a1-kube-api-access-n6kdg\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.078536 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-svc\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.078712 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.079133 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d"
Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.079291 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\"
(UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-config\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.134734 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6kdg\" (UniqueName: \"kubernetes.io/projected/0acfb4a5-5122-4f88-885f-7e255f82c2a1-kube-api-access-n6kdg\") pod \"dnsmasq-dns-55f844cf75-rjx2d\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.176581 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dc7665556-sgmcf"] Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.178504 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.184502 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.184932 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-t9vqr" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.185087 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.185205 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.203248 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc7665556-sgmcf"] Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.221725 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.282533 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-ovndb-tls-certs\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.282976 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-config\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.283061 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvz9\" (UniqueName: \"kubernetes.io/projected/c33d7171-905b-4832-8c9c-f0a05b77fde4-kube-api-access-ddvz9\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.283143 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-combined-ca-bundle\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.283226 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-httpd-config\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.385229 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-ovndb-tls-certs\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.385345 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-config\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.385372 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddvz9\" (UniqueName: \"kubernetes.io/projected/c33d7171-905b-4832-8c9c-f0a05b77fde4-kube-api-access-ddvz9\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.385459 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-combined-ca-bundle\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " 
pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.385496 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-httpd-config\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.391215 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-combined-ca-bundle\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.393458 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-ovndb-tls-certs\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.393730 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-config\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.400412 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-httpd-config\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.407254 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvz9\" (UniqueName: \"kubernetes.io/projected/c33d7171-905b-4832-8c9c-f0a05b77fde4-kube-api-access-ddvz9\") pod \"neutron-dc7665556-sgmcf\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:50 crc kubenswrapper[4904]: I1121 13:55:50.524441 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.751059 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4bgqc"] Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.753901 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.760920 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bgqc"] Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.826373 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-catalog-content\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.826481 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-utilities\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.826661 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pbz\" (UniqueName: \"kubernetes.io/projected/eb8abf79-d309-4440-9815-0ebcb93b9312-kube-api-access-r2pbz\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.870870 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bc5bc9bb5-xls6r"] Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.875219 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.879491 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.880044 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.892208 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bc5bc9bb5-xls6r"] Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.929877 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-catalog-content\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.929980 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-utilities\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.930053 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2pbz\" (UniqueName: \"kubernetes.io/projected/eb8abf79-d309-4440-9815-0ebcb93b9312-kube-api-access-r2pbz\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc 
kubenswrapper[4904]: I1121 13:55:51.931070 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-catalog-content\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.931297 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-utilities\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:51 crc kubenswrapper[4904]: I1121 13:55:51.979850 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2pbz\" (UniqueName: \"kubernetes.io/projected/eb8abf79-d309-4440-9815-0ebcb93b9312-kube-api-access-r2pbz\") pod \"certified-operators-4bgqc\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.032071 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vh6\" (UniqueName: \"kubernetes.io/projected/9814cd75-e32a-40e2-9509-08d8256ee1c7-kube-api-access-78vh6\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.032160 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-combined-ca-bundle\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.032238 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-ovndb-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.032311 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-httpd-config\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.032343 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-internal-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.032363 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-public-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " 
pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.032415 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-config\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.082882 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.134114 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vh6\" (UniqueName: \"kubernetes.io/projected/9814cd75-e32a-40e2-9509-08d8256ee1c7-kube-api-access-78vh6\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.134196 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-combined-ca-bundle\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.134259 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-ovndb-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.134343 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-httpd-config\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.134374 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-internal-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.134395 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-public-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.134443 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-config\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.139035 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-config\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: 
\"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.139050 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-ovndb-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.140439 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-httpd-config\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.140972 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-internal-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.144551 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-combined-ca-bundle\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.147484 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9814cd75-e32a-40e2-9509-08d8256ee1c7-public-tls-certs\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.154993 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78vh6\" (UniqueName: \"kubernetes.io/projected/9814cd75-e32a-40e2-9509-08d8256ee1c7-kube-api-access-78vh6\") pod \"neutron-5bc5bc9bb5-xls6r\" (UID: \"9814cd75-e32a-40e2-9509-08d8256ee1c7\") " pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.214885 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.523277 4904 generic.go:334] "Generic (PLEG): container finished" podID="a9067c4e-8264-43c2-b86a-7950915f1a31" containerID="1c243466d3d8eb830ee5b9a97e928c7aa949fcb9dc08023366b3f52f50a398f3" exitCode=0 Nov 21 13:55:52 crc kubenswrapper[4904]: I1121 13:55:52.528021 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4hqqk" event={"ID":"a9067c4e-8264-43c2-b86a-7950915f1a31","Type":"ContainerDied","Data":"1c243466d3d8eb830ee5b9a97e928c7aa949fcb9dc08023366b3f52f50a398f3"} Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.760444 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.901830 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrm6l\" (UniqueName: \"kubernetes.io/projected/a9067c4e-8264-43c2-b86a-7950915f1a31-kube-api-access-nrm6l\") pod \"a9067c4e-8264-43c2-b86a-7950915f1a31\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.902006 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-combined-ca-bundle\") pod \"a9067c4e-8264-43c2-b86a-7950915f1a31\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.902041 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-fernet-keys\") pod \"a9067c4e-8264-43c2-b86a-7950915f1a31\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.902159 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-credential-keys\") pod \"a9067c4e-8264-43c2-b86a-7950915f1a31\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.902249 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-config-data\") pod \"a9067c4e-8264-43c2-b86a-7950915f1a31\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.902384 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-scripts\") pod \"a9067c4e-8264-43c2-b86a-7950915f1a31\" (UID: \"a9067c4e-8264-43c2-b86a-7950915f1a31\") " Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.909964 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-scripts" (OuterVolumeSpecName: "scripts") pod "a9067c4e-8264-43c2-b86a-7950915f1a31" (UID: "a9067c4e-8264-43c2-b86a-7950915f1a31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.914427 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a9067c4e-8264-43c2-b86a-7950915f1a31" (UID: "a9067c4e-8264-43c2-b86a-7950915f1a31"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.915279 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9067c4e-8264-43c2-b86a-7950915f1a31-kube-api-access-nrm6l" (OuterVolumeSpecName: "kube-api-access-nrm6l") pod "a9067c4e-8264-43c2-b86a-7950915f1a31" (UID: "a9067c4e-8264-43c2-b86a-7950915f1a31"). InnerVolumeSpecName "kube-api-access-nrm6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.922290 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a9067c4e-8264-43c2-b86a-7950915f1a31" (UID: "a9067c4e-8264-43c2-b86a-7950915f1a31"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.959383 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-config-data" (OuterVolumeSpecName: "config-data") pod "a9067c4e-8264-43c2-b86a-7950915f1a31" (UID: "a9067c4e-8264-43c2-b86a-7950915f1a31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:54 crc kubenswrapper[4904]: I1121 13:55:54.982520 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9067c4e-8264-43c2-b86a-7950915f1a31" (UID: "a9067c4e-8264-43c2-b86a-7950915f1a31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.004926 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrm6l\" (UniqueName: \"kubernetes.io/projected/a9067c4e-8264-43c2-b86a-7950915f1a31-kube-api-access-nrm6l\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.004966 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.004975 4904 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.004987 4904 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.004996 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.005004 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9067c4e-8264-43c2-b86a-7950915f1a31-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.581537 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4hqqk" event={"ID":"a9067c4e-8264-43c2-b86a-7950915f1a31","Type":"ContainerDied","Data":"dbc6ffe5c579b4be45fcd68f7a0447e7d2649a1d28e983643231c14d23827200"} Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.582403 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc6ffe5c579b4be45fcd68f7a0447e7d2649a1d28e983643231c14d23827200" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.581616 4904 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4hqqk" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.995354 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b64bb974-stp6c"] Nov 21 13:55:55 crc kubenswrapper[4904]: E1121 13:55:55.996047 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9067c4e-8264-43c2-b86a-7950915f1a31" containerName="keystone-bootstrap" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.996061 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9067c4e-8264-43c2-b86a-7950915f1a31" containerName="keystone-bootstrap" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.996239 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9067c4e-8264-43c2-b86a-7950915f1a31" containerName="keystone-bootstrap" Nov 21 13:55:55 crc kubenswrapper[4904]: I1121 13:55:55.997193 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.004287 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.004702 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.004869 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.005262 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.009230 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.017887 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-fbg72" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.032297 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b64bb974-stp6c"] Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.033283 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-fernet-keys\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.033421 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-scripts\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.033466 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-internal-tls-certs\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.033533 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-credential-keys\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.033576 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-config-data\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.033624 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49h4h\" (UniqueName: \"kubernetes.io/projected/9b547005-5eef-4c9a-91a0-7d796e269d05-kube-api-access-49h4h\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.033718 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-combined-ca-bundle\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.033758 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-public-tls-certs\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.135917 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-credential-keys\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.135987 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-config-data\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.136035 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49h4h\" (UniqueName: \"kubernetes.io/projected/9b547005-5eef-4c9a-91a0-7d796e269d05-kube-api-access-49h4h\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.136083 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-combined-ca-bundle\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.136107 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-public-tls-certs\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.136146 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-fernet-keys\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.136194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-scripts\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.136226 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-internal-tls-certs\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.147009 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-public-tls-certs\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.153648 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-config-data\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.155021 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-scripts\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.155213 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-internal-tls-certs\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.155743 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-combined-ca-bundle\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.170926 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-credential-keys\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc 
kubenswrapper[4904]: I1121 13:55:56.171725 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9b547005-5eef-4c9a-91a0-7d796e269d05-fernet-keys\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.179423 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49h4h\" (UniqueName: \"kubernetes.io/projected/9b547005-5eef-4c9a-91a0-7d796e269d05-kube-api-access-49h4h\") pod \"keystone-b64bb974-stp6c\" (UID: \"9b547005-5eef-4c9a-91a0-7d796e269d05\") " pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:56 crc kubenswrapper[4904]: I1121 13:55:56.322085 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:55:59 crc kubenswrapper[4904]: E1121 13:55:59.711499 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: reading manifest sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad in quay.io/openshift-release-dev/ocp-v4.0-art-dev: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Nov 21 13:55:59 crc kubenswrapper[4904]: E1121 13:55:59.712761 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:120MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{125829120 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rnxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dn4kl_openshift-marketplace(713cdb85-aeb5-46c2-9fe0-bed76d06dc9a): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: reading manifest sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad in quay.io/openshift-release-dev/ocp-v4.0-art-dev: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Nov 21 13:55:59 crc kubenswrapper[4904]: E1121 13:55:59.714363 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: reading manifest sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad in quay.io/openshift-release-dev/ocp-v4.0-art-dev: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/community-operators-dn4kl" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" Nov 21 13:56:05 crc kubenswrapper[4904]: E1121 13:56:05.208384 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/community-operators-dn4kl" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" Nov 21 13:56:05 crc kubenswrapper[4904]: E1121 13:56:05.264186 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Nov 21 13:56:05 crc kubenswrapper[4904]: E1121 13:56:05.264472 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4mvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a896cd91-f67c-48a4-8791-53dc03090b28): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:56:05 crc kubenswrapper[4904]: E1121 13:56:05.312328 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 21 13:56:05 crc kubenswrapper[4904]: E1121 13:56:05.313050 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scdpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lj7p5_openstack(6201ad46-9eaf-4b17-b40f-e31756dea737): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:56:05 crc kubenswrapper[4904]: E1121 13:56:05.314320 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lj7p5" podUID="6201ad46-9eaf-4b17-b40f-e31756dea737" Nov 21 13:56:05 crc kubenswrapper[4904]: I1121 13:56:05.755362 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jpqkx" event={"ID":"517c919d-5531-457e-ae70-aa39ea0282a9","Type":"ContainerStarted","Data":"4d3df7d6bc827d40e499f0a77349b33a92fe2707bd35c365cbdad70cc5b42536"} Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.051602 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jpqkx" podStartSLOduration=2.674440959 podStartE2EDuration="50.051582241s" podCreationTimestamp="2025-11-21 13:55:16 +0000 UTC" firstStartedPulling="2025-11-21 13:55:17.862607271 +0000 UTC m=+1391.984139823" lastFinishedPulling="2025-11-21 13:56:05.239748543 +0000 UTC m=+1439.361281105" observedRunningTime="2025-11-21 13:56:05.789181698 +0000 UTC m=+1439.910714250" watchObservedRunningTime="2025-11-21 13:56:06.051582241 +0000 UTC m=+1440.173114793" Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.070035 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4bgqc"] Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.137051 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dc7665556-sgmcf"] Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.221390 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bc5bc9bb5-xls6r"] Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.244128 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-rjx2d"] Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.401527 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b64bb974-stp6c"] Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.766787 4904 generic.go:334] "Generic (PLEG): container finished" podID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerID="428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302" exitCode=0 Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.767425 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bgqc" event={"ID":"eb8abf79-d309-4440-9815-0ebcb93b9312","Type":"ContainerDied","Data":"428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302"} Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.767472 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bgqc" event={"ID":"eb8abf79-d309-4440-9815-0ebcb93b9312","Type":"ContainerStarted","Data":"5b6aec42214736a6eaf8b679dd83dffcc20d8dbc49cc8d2ff4efc146a555037a"} Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.769740 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b64bb974-stp6c" event={"ID":"9b547005-5eef-4c9a-91a0-7d796e269d05","Type":"ContainerStarted","Data":"efd5783de8c069b3553de1b53deacfcfc49573e3cf4d7e5d68c92d9358f41eac"} Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.778382 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-2s659" event={"ID":"20b427a9-510f-45b1-82bc-cf85bb44932b","Type":"ContainerStarted","Data":"65c3cd28e63e9c5e610aef68f4a79be6a81f554865470aaf05ac99f6f863d3d7"} Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.779854 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc7665556-sgmcf" event={"ID":"c33d7171-905b-4832-8c9c-f0a05b77fde4","Type":"ContainerStarted","Data":"8550dafa71d95ac13ebbc7b305184e6d87001d168e4e763758715b8c3d3b9c0e"} Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.780983 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" event={"ID":"0acfb4a5-5122-4f88-885f-7e255f82c2a1","Type":"ContainerStarted","Data":"40ab0a48b81191c3aefa0a279442a76765866bb4fcd3f4695f14de40df64a363"} Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.782988 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bc5bc9bb5-xls6r" event={"ID":"9814cd75-e32a-40e2-9509-08d8256ee1c7","Type":"ContainerStarted","Data":"09ea4d4185bb4ac4530bdce35fc9e4694f96f5e89fe47e31f54fbb601425f8af"} Nov 21 13:56:06 crc kubenswrapper[4904]: I1121 13:56:06.823191 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2s659" podStartSLOduration=3.791898704 podStartE2EDuration="50.823171489s" podCreationTimestamp="2025-11-21 13:55:16 +0000 UTC" firstStartedPulling="2025-11-21 13:55:18.177156854 +0000 UTC m=+1392.298689406" lastFinishedPulling="2025-11-21 13:56:05.208429629 +0000 UTC m=+1439.329962191" observedRunningTime="2025-11-21 13:56:06.816954357 +0000 UTC m=+1440.938486929" watchObservedRunningTime="2025-11-21 13:56:06.823171489 +0000 UTC m=+1440.944704041" Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.796974 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bc5bc9bb5-xls6r" event={"ID":"9814cd75-e32a-40e2-9509-08d8256ee1c7","Type":"ContainerStarted","Data":"f7755ab8ca5a1cd5c2908681f4d11900b4c2d4c47cb3c5654cc433c09c824e25"} Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.797501 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.797520 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bc5bc9bb5-xls6r" event={"ID":"9814cd75-e32a-40e2-9509-08d8256ee1c7","Type":"ContainerStarted","Data":"81a3b4fc1505375eaa1ea9003cd5005c40d73d46e5c7e7bc8923bdac4ebb2549"} Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.800770 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b64bb974-stp6c" event={"ID":"9b547005-5eef-4c9a-91a0-7d796e269d05","Type":"ContainerStarted","Data":"ddbad6392ceb9c74eb1db8bc374cff8a1eab3f7ce2fe704c52af7bf65f165972"} Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.801063 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.803191 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc7665556-sgmcf" event={"ID":"c33d7171-905b-4832-8c9c-f0a05b77fde4","Type":"ContainerStarted","Data":"cd79e26b239ca7f5275e1ca2d56bb42cd402ece4f629dd476ba443895eb8fcc8"} Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.803251 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc7665556-sgmcf" 
event={"ID":"c33d7171-905b-4832-8c9c-f0a05b77fde4","Type":"ContainerStarted","Data":"ee30ae8fa3265288b5166789e0c11461d588634adaf461db976121e0a6307b02"} Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.803449 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.804834 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" containerID="ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca" exitCode=0 Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.804884 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" event={"ID":"0acfb4a5-5122-4f88-885f-7e255f82c2a1","Type":"ContainerDied","Data":"ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca"} Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.852961 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bc5bc9bb5-xls6r" podStartSLOduration=16.852936236 podStartE2EDuration="16.852936236s" podCreationTimestamp="2025-11-21 13:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:07.817565195 +0000 UTC m=+1441.939097747" watchObservedRunningTime="2025-11-21 13:56:07.852936236 +0000 UTC m=+1441.974468788" Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.868571 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dc7665556-sgmcf" podStartSLOduration=17.868551937 podStartE2EDuration="17.868551937s" podCreationTimestamp="2025-11-21 13:55:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:07.856735179 +0000 UTC m=+1441.978267741" watchObservedRunningTime="2025-11-21 13:56:07.868551937 +0000 UTC m=+1441.990084489" Nov 21 13:56:07 crc kubenswrapper[4904]: I1121 13:56:07.914919 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b64bb974-stp6c" podStartSLOduration=12.914882665 podStartE2EDuration="12.914882665s" podCreationTimestamp="2025-11-21 13:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:07.900427913 +0000 UTC m=+1442.021960475" watchObservedRunningTime="2025-11-21 13:56:07.914882665 +0000 UTC m=+1442.036415217" Nov 21 13:56:08 crc kubenswrapper[4904]: I1121 13:56:08.826759 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bgqc" event={"ID":"eb8abf79-d309-4440-9815-0ebcb93b9312","Type":"ContainerStarted","Data":"7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd"} Nov 21 13:56:08 crc kubenswrapper[4904]: I1121 13:56:08.832046 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" event={"ID":"0acfb4a5-5122-4f88-885f-7e255f82c2a1","Type":"ContainerStarted","Data":"361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97"} Nov 21 13:56:08 crc kubenswrapper[4904]: I1121 13:56:08.832634 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:56:08 crc kubenswrapper[4904]: I1121 13:56:08.883721 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" podStartSLOduration=19.883703238 podStartE2EDuration="19.883703238s" podCreationTimestamp="2025-11-21 13:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:08.877214281 +0000 UTC m=+1442.998746843" watchObservedRunningTime="2025-11-21 13:56:08.883703238 +0000 UTC m=+1443.005235780" Nov 21 13:56:09 crc kubenswrapper[4904]: E1121 13:56:09.645691 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 21 13:56:09 crc kubenswrapper[4904]: E1121 13:56:09.645974 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-874vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qg6bd_openstack(46d63715-407d-4908-a38f-5f6fd76729db): ErrImagePull: initializing source 
docker://quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" logger="UnhandledError" Nov 21 13:56:09 crc kubenswrapper[4904]: E1121 13:56:09.647408 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"initializing source docker://quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)\"" pod="openstack/cinder-db-sync-qg6bd" podUID="46d63715-407d-4908-a38f-5f6fd76729db" Nov 21 13:56:09 crc kubenswrapper[4904]: I1121 13:56:09.874084 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bgqc" event={"ID":"eb8abf79-d309-4440-9815-0ebcb93b9312","Type":"ContainerDied","Data":"7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd"} Nov 21 13:56:09 crc kubenswrapper[4904]: I1121 13:56:09.874193 4904 generic.go:334] "Generic (PLEG): container finished" podID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerID="7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd" exitCode=0 Nov 21 13:56:11 crc kubenswrapper[4904]: I1121 13:56:11.911271 4904 generic.go:334] "Generic (PLEG): container finished" podID="20b427a9-510f-45b1-82bc-cf85bb44932b" containerID="65c3cd28e63e9c5e610aef68f4a79be6a81f554865470aaf05ac99f6f863d3d7" exitCode=0 Nov 21 13:56:11 crc kubenswrapper[4904]: I1121 13:56:11.911342 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2s659" event={"ID":"20b427a9-510f-45b1-82bc-cf85bb44932b","Type":"ContainerDied","Data":"65c3cd28e63e9c5e610aef68f4a79be6a81f554865470aaf05ac99f6f863d3d7"} Nov 21 13:56:13 crc kubenswrapper[4904]: I1121 13:56:13.940856 4904 generic.go:334] "Generic (PLEG): container finished" podID="517c919d-5531-457e-ae70-aa39ea0282a9" containerID="4d3df7d6bc827d40e499f0a77349b33a92fe2707bd35c365cbdad70cc5b42536" exitCode=0 Nov 21 13:56:13 crc kubenswrapper[4904]: I1121 13:56:13.940959 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jpqkx" event={"ID":"517c919d-5531-457e-ae70-aa39ea0282a9","Type":"ContainerDied","Data":"4d3df7d6bc827d40e499f0a77349b33a92fe2707bd35c365cbdad70cc5b42536"} Nov 21 13:56:14 crc kubenswrapper[4904]: I1121 13:56:14.981396 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2s659" event={"ID":"20b427a9-510f-45b1-82bc-cf85bb44932b","Type":"ContainerDied","Data":"1de828bfb005e23979262d50bbc1c4b63926725bbd0a0dc2a3bbfdda6da0bd78"} Nov 21 13:56:14 crc kubenswrapper[4904]: I1121 13:56:14.981447 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de828bfb005e23979262d50bbc1c4b63926725bbd0a0dc2a3bbfdda6da0bd78"
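
The registry failure above surfaces three times: the CRI-level pull fails with a 504, kuberuntime_manager dumps the full Container spec as an "Unhandled Error", and pod_workers abandons the sync with ErrImagePull; the ImagePullBackOff entries for the same cinder-db-sync pod a few seconds later (13:56:20 below) are the throttling between retries. A rough Go sketch of that retry shape, with delays scaled down for illustration; this is not kubelet source, and pullImage is a hypothetical stand-in for the CRI PullImage call:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pullImage stands in for a CRI PullImage call; here it always fails the
// way the registry did above.
func pullImage(ref string) error {
	return errors.New("invalid status code from registry 504 (Gateway Timeout)")
}

func main() {
	ref := "quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
	delay, maxDelay := 10*time.Millisecond, 80*time.Millisecond // scaled down for the sketch
	for attempt := 1; attempt <= 4; attempt++ {
		err := pullImage(ref)
		if err == nil {
			return
		}
		// While waiting here, the pod's status reads ImagePullBackOff.
		fmt.Printf("attempt %d: %v; backing off %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.039103 4904 util.go:48] "No ready sandbox for pod can be found.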
Need to start a new one" pod="openstack/placement-db-sync-2s659" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.119761 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-scripts\") pod \"20b427a9-510f-45b1-82bc-cf85bb44932b\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.119955 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-config-data\") pod \"20b427a9-510f-45b1-82bc-cf85bb44932b\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.120046 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b427a9-510f-45b1-82bc-cf85bb44932b-logs\") pod \"20b427a9-510f-45b1-82bc-cf85bb44932b\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.120175 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv5c9\" (UniqueName: \"kubernetes.io/projected/20b427a9-510f-45b1-82bc-cf85bb44932b-kube-api-access-jv5c9\") pod \"20b427a9-510f-45b1-82bc-cf85bb44932b\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.120207 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-combined-ca-bundle\") pod \"20b427a9-510f-45b1-82bc-cf85bb44932b\" (UID: \"20b427a9-510f-45b1-82bc-cf85bb44932b\") " Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.122747 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b427a9-510f-45b1-82bc-cf85bb44932b-logs" (OuterVolumeSpecName: "logs") pod "20b427a9-510f-45b1-82bc-cf85bb44932b" (UID: "20b427a9-510f-45b1-82bc-cf85bb44932b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.135227 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b427a9-510f-45b1-82bc-cf85bb44932b-kube-api-access-jv5c9" (OuterVolumeSpecName: "kube-api-access-jv5c9") pod "20b427a9-510f-45b1-82bc-cf85bb44932b" (UID: "20b427a9-510f-45b1-82bc-cf85bb44932b"). InnerVolumeSpecName "kube-api-access-jv5c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.148635 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-scripts" (OuterVolumeSpecName: "scripts") pod "20b427a9-510f-45b1-82bc-cf85bb44932b" (UID: "20b427a9-510f-45b1-82bc-cf85bb44932b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.161043 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-config-data" (OuterVolumeSpecName: "config-data") pod "20b427a9-510f-45b1-82bc-cf85bb44932b" (UID: "20b427a9-510f-45b1-82bc-cf85bb44932b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.168979 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20b427a9-510f-45b1-82bc-cf85bb44932b" (UID: "20b427a9-510f-45b1-82bc-cf85bb44932b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.223336 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv5c9\" (UniqueName: \"kubernetes.io/projected/20b427a9-510f-45b1-82bc-cf85bb44932b-kube-api-access-jv5c9\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.223376 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.223391 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.223406 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b427a9-510f-45b1-82bc-cf85bb44932b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.223840 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.225676 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b427a9-510f-45b1-82bc-cf85bb44932b-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.316367 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vmk6g"] Nov 21 13:56:15 crc kubenswrapper[4904]: I1121 13:56:15.316719 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" podUID="73cc8d35-c5fc-4ed7-a296-e738982614b5" containerName="dnsmasq-dns" containerID="cri-o://c40f0c82366703d54d93db604dbaea6f236485ef33c5c853351c10651a5dadcd" gracePeriod=10 Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.014504 4904 generic.go:334] "Generic (PLEG): container finished" podID="73cc8d35-c5fc-4ed7-a296-e738982614b5" containerID="c40f0c82366703d54d93db604dbaea6f236485ef33c5c853351c10651a5dadcd" exitCode=0 Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.014645 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" event={"ID":"73cc8d35-c5fc-4ed7-a296-e738982614b5","Type":"ContainerDied","Data":"c40f0c82366703d54d93db604dbaea6f236485ef33c5c853351c10651a5dadcd"} Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.015027 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2s659" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.340751 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f758bd6b6-r5mk2"] Nov 21 13:56:16 crc kubenswrapper[4904]: E1121 13:56:16.341481 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b427a9-510f-45b1-82bc-cf85bb44932b" containerName="placement-db-sync" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.341508 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b427a9-510f-45b1-82bc-cf85bb44932b" containerName="placement-db-sync" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.341777 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b427a9-510f-45b1-82bc-cf85bb44932b" containerName="placement-db-sync" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.343374 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.346895 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.347290 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-t2ml2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.347437 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.347551 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.348324 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.366259 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f758bd6b6-r5mk2"] Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.416387 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jpqkx" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.467766 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-scripts\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.467872 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx66d\" (UniqueName: \"kubernetes.io/projected/9653a120-e74b-4330-8e8f-faf95b13f63e-kube-api-access-qx66d\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.467902 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-public-tls-certs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.467939 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9653a120-e74b-4330-8e8f-faf95b13f63e-logs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.468002 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-config-data\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.468139 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-internal-tls-certs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.468169 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-combined-ca-bundle\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.572565 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6qml\" (UniqueName: \"kubernetes.io/projected/517c919d-5531-457e-ae70-aa39ea0282a9-kube-api-access-g6qml\") pod \"517c919d-5531-457e-ae70-aa39ea0282a9\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.572739 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-combined-ca-bundle\") pod \"517c919d-5531-457e-ae70-aa39ea0282a9\" (UID: 
\"517c919d-5531-457e-ae70-aa39ea0282a9\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.572982 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-config-data\") pod \"517c919d-5531-457e-ae70-aa39ea0282a9\" (UID: \"517c919d-5531-457e-ae70-aa39ea0282a9\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.573631 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx66d\" (UniqueName: \"kubernetes.io/projected/9653a120-e74b-4330-8e8f-faf95b13f63e-kube-api-access-qx66d\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.573699 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-public-tls-certs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.573760 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9653a120-e74b-4330-8e8f-faf95b13f63e-logs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.573839 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-config-data\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.573941 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-internal-tls-certs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.573987 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-combined-ca-bundle\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.574075 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-scripts\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.584639 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-internal-tls-certs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.585019 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-config-data\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.585794 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-public-tls-certs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.587510 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/517c919d-5531-457e-ae70-aa39ea0282a9-kube-api-access-g6qml" (OuterVolumeSpecName: "kube-api-access-g6qml") pod "517c919d-5531-457e-ae70-aa39ea0282a9" (UID: "517c919d-5531-457e-ae70-aa39ea0282a9"). InnerVolumeSpecName "kube-api-access-g6qml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.589782 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-scripts\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.593460 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9653a120-e74b-4330-8e8f-faf95b13f63e-combined-ca-bundle\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.600912 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9653a120-e74b-4330-8e8f-faf95b13f63e-logs\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.606329 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx66d\" (UniqueName: \"kubernetes.io/projected/9653a120-e74b-4330-8e8f-faf95b13f63e-kube-api-access-qx66d\") pod \"placement-6f758bd6b6-r5mk2\" (UID: \"9653a120-e74b-4330-8e8f-faf95b13f63e\") " pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.668892 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "517c919d-5531-457e-ae70-aa39ea0282a9" (UID: "517c919d-5531-457e-ae70-aa39ea0282a9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.678807 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6qml\" (UniqueName: \"kubernetes.io/projected/517c919d-5531-457e-ae70-aa39ea0282a9-kube-api-access-g6qml\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.678861 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.726018 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.741145 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.757452 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-config-data" (OuterVolumeSpecName: "config-data") pod "517c919d-5531-457e-ae70-aa39ea0282a9" (UID: "517c919d-5531-457e-ae70-aa39ea0282a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.781109 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/517c919d-5531-457e-ae70-aa39ea0282a9-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:16 crc kubenswrapper[4904]: E1121 13:56:16.789725 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.887981 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-sb\") pod \"73cc8d35-c5fc-4ed7-a296-e738982614b5\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.888105 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-svc\") pod \"73cc8d35-c5fc-4ed7-a296-e738982614b5\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.888248 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-swift-storage-0\") pod \"73cc8d35-c5fc-4ed7-a296-e738982614b5\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.888374 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcz2k\" (UniqueName: \"kubernetes.io/projected/73cc8d35-c5fc-4ed7-a296-e738982614b5-kube-api-access-kcz2k\") pod \"73cc8d35-c5fc-4ed7-a296-e738982614b5\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.888423 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-nb\") pod \"73cc8d35-c5fc-4ed7-a296-e738982614b5\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.888466 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-config\") pod \"73cc8d35-c5fc-4ed7-a296-e738982614b5\" (UID: \"73cc8d35-c5fc-4ed7-a296-e738982614b5\") " Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.898199 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73cc8d35-c5fc-4ed7-a296-e738982614b5-kube-api-access-kcz2k" (OuterVolumeSpecName: "kube-api-access-kcz2k") pod "73cc8d35-c5fc-4ed7-a296-e738982614b5" (UID: "73cc8d35-c5fc-4ed7-a296-e738982614b5"). InnerVolumeSpecName "kube-api-access-kcz2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.948336 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-config" (OuterVolumeSpecName: "config") pod "73cc8d35-c5fc-4ed7-a296-e738982614b5" (UID: "73cc8d35-c5fc-4ed7-a296-e738982614b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.957301 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "73cc8d35-c5fc-4ed7-a296-e738982614b5" (UID: "73cc8d35-c5fc-4ed7-a296-e738982614b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.963044 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "73cc8d35-c5fc-4ed7-a296-e738982614b5" (UID: "73cc8d35-c5fc-4ed7-a296-e738982614b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.971751 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "73cc8d35-c5fc-4ed7-a296-e738982614b5" (UID: "73cc8d35-c5fc-4ed7-a296-e738982614b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.975414 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "73cc8d35-c5fc-4ed7-a296-e738982614b5" (UID: "73cc8d35-c5fc-4ed7-a296-e738982614b5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.992766 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcz2k\" (UniqueName: \"kubernetes.io/projected/73cc8d35-c5fc-4ed7-a296-e738982614b5-kube-api-access-kcz2k\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.992812 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.992824 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.992839 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.992857 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:16 crc kubenswrapper[4904]: I1121 13:56:16.992869 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/73cc8d35-c5fc-4ed7-a296-e738982614b5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.032152 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jpqkx" event={"ID":"517c919d-5531-457e-ae70-aa39ea0282a9","Type":"ContainerDied","Data":"91527cd6d4b9d33cbe0ed429468e41538d2457d5df4d903a419bcc454a55dfd0"} Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.032214 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91527cd6d4b9d33cbe0ed429468e41538d2457d5df4d903a419bcc454a55dfd0" Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.032225 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jpqkx" Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.035310 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a896cd91-f67c-48a4-8791-53dc03090b28","Type":"ContainerStarted","Data":"d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715"} Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.035485 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="ceilometer-central-agent" containerID="cri-o://feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c" gracePeriod=30 Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.035534 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="proxy-httpd" containerID="cri-o://d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715" gracePeriod=30 Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.035569 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.035627 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="ceilometer-notification-agent" containerID="cri-o://a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada" gracePeriod=30 Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.038860 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" event={"ID":"73cc8d35-c5fc-4ed7-a296-e738982614b5","Type":"ContainerDied","Data":"366d5f2e765bbe3a762db35376dbe05a8d1f4775ac2359a5aa0c71d4401c534d"} Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.038920 4904 scope.go:117] "RemoveContainer" containerID="c40f0c82366703d54d93db604dbaea6f236485ef33c5c853351c10651a5dadcd" Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.039129 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-vmk6g" Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.131702 4904 scope.go:117] "RemoveContainer" containerID="f256bc4a619a3ff5175a5e1501ce758ce4bccb4d61095d4ded51617bef5337d5" Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.153876 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vmk6g"] Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.156602 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-vmk6g"] Nov 21 13:56:17 crc kubenswrapper[4904]: I1121 13:56:17.313611 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f758bd6b6-r5mk2"] Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.054754 4904 generic.go:334] "Generic (PLEG): container finished" podID="a896cd91-f67c-48a4-8791-53dc03090b28" containerID="d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715" exitCode=0 Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.055175 4904 generic.go:334] "Generic (PLEG): container finished" podID="a896cd91-f67c-48a4-8791-53dc03090b28" containerID="feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c" exitCode=0 Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.054994 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a896cd91-f67c-48a4-8791-53dc03090b28","Type":"ContainerDied","Data":"d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715"} Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.055286 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a896cd91-f67c-48a4-8791-53dc03090b28","Type":"ContainerDied","Data":"feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c"} Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.057305 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f758bd6b6-r5mk2" event={"ID":"9653a120-e74b-4330-8e8f-faf95b13f63e","Type":"ContainerStarted","Data":"283f1062fc70434739c4d9b1232267e3ac5b41eb147c4f1b2a132351a48fd7d5"} Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.057454 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f758bd6b6-r5mk2" event={"ID":"9653a120-e74b-4330-8e8f-faf95b13f63e","Type":"ContainerStarted","Data":"fc1aa6b6abf108a474f1029bd16ab2be58bcce11f7d45e03d6061db031b4872a"} Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.057609 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f758bd6b6-r5mk2" event={"ID":"9653a120-e74b-4330-8e8f-faf95b13f63e","Type":"ContainerStarted","Data":"c51b22cfc4553062c2259b96e530e4fe3d5f578137de77bb0fc51b632190c9f2"} Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.057818 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.057945 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.098007 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f758bd6b6-r5mk2" podStartSLOduration=2.097970022 podStartE2EDuration="2.097970022s" podCreationTimestamp="2025-11-21 13:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-21 13:56:18.090218333 +0000 UTC m=+1452.211750925" watchObservedRunningTime="2025-11-21 13:56:18.097970022 +0000 UTC m=+1452.219502604" Nov 21 13:56:18 crc kubenswrapper[4904]: I1121 13:56:18.530331 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73cc8d35-c5fc-4ed7-a296-e738982614b5" path="/var/lib/kubelet/pods/73cc8d35-c5fc-4ed7-a296-e738982614b5/volumes" Nov 21 13:56:19 crc kubenswrapper[4904]: E1121 13:56:19.514995 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-lj7p5" podUID="6201ad46-9eaf-4b17-b40f-e31756dea737" Nov 21 13:56:20 crc kubenswrapper[4904]: E1121 13:56:20.317769 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Nov 21 13:56:20 crc kubenswrapper[4904]: E1121 13:56:20.318494 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:40MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{41943040 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2pbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4bgqc_openshift-marketplace(eb8abf79-d309-4440-9815-0ebcb93b9312): ErrImagePull: parsing image configuration: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" logger="UnhandledError" Nov 21 13:56:20 crc kubenswrapper[4904]: E1121 13:56:20.320368 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"parsing image configuration: fetching blob: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-marketplace/certified-operators-4bgqc" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" Nov 21 13:56:20 crc kubenswrapper[4904]: E1121 13:56:20.517239 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qg6bd" podUID="46d63715-407d-4908-a38f-5f6fd76729db" Nov 21 13:56:20 crc kubenswrapper[4904]: I1121 13:56:20.550896 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-dc7665556-sgmcf" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:56:20 crc kubenswrapper[4904]: I1121 13:56:20.551375 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-dc7665556-sgmcf" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:56:20 crc kubenswrapper[4904]: I1121 13:56:20.551536 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-dc7665556-sgmcf" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:56:21 crc kubenswrapper[4904]: E1121 13:56:21.101203 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/certified-operators-4bgqc" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312"
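
The three "Probe failed" entries above are the kubelet's HTTP prober hitting the neutron-httpd and neutron-api containers and receiving 503s while neutron warms up; an HTTP probe passes only for status codes in [200, 400). A sketch of that check in Go (the endpoint below is a placeholder, not taken from the pod spec):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP approximates a kubelet HTTP probe: GET with a short timeout,
// healthy iff the status code is in [200, 400); a 503 fails, as above.
func probeHTTP(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
}

func main() {
	fmt.Println(probeHTTP("http://127.0.0.1:9696/")) // placeholder endpoint
}
```

Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.823208 4904 util.go:48] "No ready sandbox for pod can be found.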
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.933680 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-run-httpd\") pod \"a896cd91-f67c-48a4-8791-53dc03090b28\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.933851 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-combined-ca-bundle\") pod \"a896cd91-f67c-48a4-8791-53dc03090b28\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.933945 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-log-httpd\") pod \"a896cd91-f67c-48a4-8791-53dc03090b28\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.933970 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-config-data\") pod \"a896cd91-f67c-48a4-8791-53dc03090b28\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.934009 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-scripts\") pod \"a896cd91-f67c-48a4-8791-53dc03090b28\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.934046 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4mvj\" (UniqueName: \"kubernetes.io/projected/a896cd91-f67c-48a4-8791-53dc03090b28-kube-api-access-g4mvj\") pod \"a896cd91-f67c-48a4-8791-53dc03090b28\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.934069 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-sg-core-conf-yaml\") pod \"a896cd91-f67c-48a4-8791-53dc03090b28\" (UID: \"a896cd91-f67c-48a4-8791-53dc03090b28\") " Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.934499 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a896cd91-f67c-48a4-8791-53dc03090b28" (UID: "a896cd91-f67c-48a4-8791-53dc03090b28"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.942903 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a896cd91-f67c-48a4-8791-53dc03090b28" (UID: "a896cd91-f67c-48a4-8791-53dc03090b28"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.943249 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a896cd91-f67c-48a4-8791-53dc03090b28" (UID: "a896cd91-f67c-48a4-8791-53dc03090b28"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.949493 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-scripts" (OuterVolumeSpecName: "scripts") pod "a896cd91-f67c-48a4-8791-53dc03090b28" (UID: "a896cd91-f67c-48a4-8791-53dc03090b28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:21 crc kubenswrapper[4904]: I1121 13:56:21.954617 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a896cd91-f67c-48a4-8791-53dc03090b28-kube-api-access-g4mvj" (OuterVolumeSpecName: "kube-api-access-g4mvj") pod "a896cd91-f67c-48a4-8791-53dc03090b28" (UID: "a896cd91-f67c-48a4-8791-53dc03090b28"). InnerVolumeSpecName "kube-api-access-g4mvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.040570 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.040632 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a896cd91-f67c-48a4-8791-53dc03090b28-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.040647 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.040680 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4mvj\" (UniqueName: \"kubernetes.io/projected/a896cd91-f67c-48a4-8791-53dc03090b28-kube-api-access-g4mvj\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.040694 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.069666 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a896cd91-f67c-48a4-8791-53dc03090b28" (UID: "a896cd91-f67c-48a4-8791-53dc03090b28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.073278 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-config-data" (OuterVolumeSpecName: "config-data") pod "a896cd91-f67c-48a4-8791-53dc03090b28" (UID: "a896cd91-f67c-48a4-8791-53dc03090b28"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.114284 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn4kl" event={"ID":"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a","Type":"ContainerStarted","Data":"5217bf07d7d164f945a878a49a2fe9600164dacc19f671ff28ee86cd7fb52054"} Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.120224 4904 generic.go:334] "Generic (PLEG): container finished" podID="a896cd91-f67c-48a4-8791-53dc03090b28" containerID="a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada" exitCode=0 Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.120283 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.120309 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a896cd91-f67c-48a4-8791-53dc03090b28","Type":"ContainerDied","Data":"a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada"} Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.120387 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a896cd91-f67c-48a4-8791-53dc03090b28","Type":"ContainerDied","Data":"f1cdde6c8ce5f677d3a23e3f2d6f55ad34ac3d1d8ab4a61c392fefaf21512612"} Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.120442 4904 scope.go:117] "RemoveContainer" containerID="d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.143548 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.143594 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a896cd91-f67c-48a4-8791-53dc03090b28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.147802 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dn4kl" podStartSLOduration=4.918148557 podStartE2EDuration="38.147759915s" podCreationTimestamp="2025-11-21 13:55:44 +0000 UTC" firstStartedPulling="2025-11-21 13:55:47.72300884 +0000 UTC m=+1421.844541402" lastFinishedPulling="2025-11-21 13:56:20.952620208 +0000 UTC m=+1455.074152760" observedRunningTime="2025-11-21 13:56:22.135800193 +0000 UTC m=+1456.257332765" watchObservedRunningTime="2025-11-21 13:56:22.147759915 +0000 UTC m=+1456.269292497" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.195492 4904 scope.go:117] "RemoveContainer" containerID="a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.239501 4904 scope.go:117] "RemoveContainer" containerID="feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.267291 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.279058 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.287852 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 
13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.288496 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cc8d35-c5fc-4ed7-a296-e738982614b5" containerName="dnsmasq-dns" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.288528 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cc8d35-c5fc-4ed7-a296-e738982614b5" containerName="dnsmasq-dns" Nov 21 13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.288559 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="517c919d-5531-457e-ae70-aa39ea0282a9" containerName="heat-db-sync" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.288570 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="517c919d-5531-457e-ae70-aa39ea0282a9" containerName="heat-db-sync" Nov 21 13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.288587 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="proxy-httpd" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.288597 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="proxy-httpd" Nov 21 13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.288620 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cc8d35-c5fc-4ed7-a296-e738982614b5" containerName="init" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.288629 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cc8d35-c5fc-4ed7-a296-e738982614b5" containerName="init" Nov 21 13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.288650 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="ceilometer-notification-agent" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.288677 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="ceilometer-notification-agent" Nov 21 13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.288695 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="ceilometer-central-agent" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.288703 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="ceilometer-central-agent" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.288968 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="73cc8d35-c5fc-4ed7-a296-e738982614b5" containerName="dnsmasq-dns" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.288982 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="ceilometer-notification-agent" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.289003 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="proxy-httpd" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.289019 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" containerName="ceilometer-central-agent" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.289030 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="517c919d-5531-457e-ae70-aa39ea0282a9" containerName="heat-db-sync"
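
Before the replacement ceilometer-0 is admitted, the CPU and memory managers drop the per-container accounting left behind by the deleted pods ("RemoveStaleState: removing container", "Deleted CPUSet assignment"). The bookkeeping amounts to pruning an assignment map against the set of still-active pod UIDs; a simplified sketch of that pattern, not the kubelet's actual data model:

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState deletes any assignment whose pod is no longer active,
// mirroring the cpu_manager/memory_manager entries above.
func removeStaleState(assignments map[key]string, active map[string]bool) {
	for k := range assignments {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"a896cd91", "proxy-httpd"}: "cpus 0-1",
		{"73cc8d35", "dnsmasq-dns"}: "cpu 2",
	}
	removeStaleState(assignments, map[string]bool{}) // neither pod is active any more
	fmt.Println(len(assignments))                    // 0
}
```

Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.291419 4904 util.go:30] "No sandbox for pod can be found.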
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.297479 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.306713 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.307017 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.310692 4904 scope.go:117] "RemoveContainer" containerID="d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715" Nov 21 13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.313125 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715\": container with ID starting with d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715 not found: ID does not exist" containerID="d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.313179 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715"} err="failed to get container status \"d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715\": rpc error: code = NotFound desc = could not find container \"d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715\": container with ID starting with d37fa2d2f86984d55fde1a14ab61f191914d3bdb94b68cfe5692b411d5360715 not found: ID does not exist" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.313210 4904 scope.go:117] "RemoveContainer" containerID="a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada" Nov 21 13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.314041 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada\": container with ID starting with a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada not found: ID does not exist" containerID="a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.314069 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada"} err="failed to get container status \"a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada\": rpc error: code = NotFound desc = could not find container \"a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada\": container with ID starting with a6393871c8663298ff55b3f1331c929f1d88c2b0e091552ee1abc7d1ee677ada not found: ID does not exist" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.314087 4904 scope.go:117] "RemoveContainer" containerID="feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c" Nov 21 13:56:22 crc kubenswrapper[4904]: E1121 13:56:22.315942 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c\": container with ID starting with feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c not found: ID 
does not exist" containerID="feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.316000 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c"} err="failed to get container status \"feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c\": rpc error: code = NotFound desc = could not find container \"feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c\": container with ID starting with feb3871a2845fe2e88d079b3e6d44d02572b8d6fad74c4338dd8c301ea1fb81c not found: ID does not exist" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.449588 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-log-httpd\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.449805 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl75z\" (UniqueName: \"kubernetes.io/projected/b8aca765-8842-4314-a89a-403377c3d5be-kube-api-access-dl75z\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.450076 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.450361 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-scripts\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.450403 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-config-data\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.450671 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.450855 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-run-httpd\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.527065 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a896cd91-f67c-48a4-8791-53dc03090b28" path="/var/lib/kubelet/pods/a896cd91-f67c-48a4-8791-53dc03090b28/volumes" Nov 21 
13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.552417 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl75z\" (UniqueName: \"kubernetes.io/projected/b8aca765-8842-4314-a89a-403377c3d5be-kube-api-access-dl75z\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.552503 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.552575 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-scripts\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.552597 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-config-data\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.552757 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.552816 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-run-httpd\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.552862 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-log-httpd\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.553366 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-log-httpd\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.553475 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-run-httpd\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.558064 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 
13:56:22.558485 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-config-data\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.559926 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-scripts\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.569576 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.584611 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl75z\" (UniqueName: \"kubernetes.io/projected/b8aca765-8842-4314-a89a-403377c3d5be-kube-api-access-dl75z\") pod \"ceilometer-0\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.628232 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.715484 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bc5bc9bb5-xls6r" Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.816242 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc7665556-sgmcf"] Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.816506 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dc7665556-sgmcf" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-api" containerID="cri-o://ee30ae8fa3265288b5166789e0c11461d588634adaf461db976121e0a6307b02" gracePeriod=30 Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.817486 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dc7665556-sgmcf" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-httpd" containerID="cri-o://cd79e26b239ca7f5275e1ca2d56bb42cd402ece4f629dd476ba443895eb8fcc8" gracePeriod=30 Nov 21 13:56:22 crc kubenswrapper[4904]: I1121 13:56:22.926959 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-dc7665556-sgmcf" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.180:9696/\": read tcp 10.217.0.2:41206->10.217.0.180:9696: read: connection reset by peer" Nov 21 13:56:23 crc kubenswrapper[4904]: I1121 13:56:23.138857 4904 generic.go:334] "Generic (PLEG): container finished" podID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerID="cd79e26b239ca7f5275e1ca2d56bb42cd402ece4f629dd476ba443895eb8fcc8" exitCode=0 Nov 21 13:56:23 crc kubenswrapper[4904]: I1121 13:56:23.138964 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc7665556-sgmcf" event={"ID":"c33d7171-905b-4832-8c9c-f0a05b77fde4","Type":"ContainerDied","Data":"cd79e26b239ca7f5275e1ca2d56bb42cd402ece4f629dd476ba443895eb8fcc8"} Nov 21 13:56:23 crc kubenswrapper[4904]: I1121 
13:56:23.264548 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:23 crc kubenswrapper[4904]: W1121 13:56:23.272858 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8aca765_8842_4314_a89a_403377c3d5be.slice/crio-c369637602ab74f9e7dfdc10fc01ac49680d551695f16e99dd4bba456d4d830e WatchSource:0}: Error finding container c369637602ab74f9e7dfdc10fc01ac49680d551695f16e99dd4bba456d4d830e: Status 404 returned error can't find the container with id c369637602ab74f9e7dfdc10fc01ac49680d551695f16e99dd4bba456d4d830e Nov 21 13:56:24 crc kubenswrapper[4904]: I1121 13:56:24.153274 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerStarted","Data":"c369637602ab74f9e7dfdc10fc01ac49680d551695f16e99dd4bba456d4d830e"} Nov 21 13:56:24 crc kubenswrapper[4904]: I1121 13:56:24.644377 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:56:24 crc kubenswrapper[4904]: I1121 13:56:24.644943 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:56:25 crc kubenswrapper[4904]: I1121 13:56:25.167711 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerStarted","Data":"78f3ddcfc6799851f8926a2b46d04b80d24bd2d70f34004c4147f8ebbb4ef841"} Nov 21 13:56:25 crc kubenswrapper[4904]: I1121 13:56:25.709768 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dn4kl" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="registry-server" probeResult="failure" output=< Nov 21 13:56:25 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 13:56:25 crc kubenswrapper[4904]: > Nov 21 13:56:26 crc kubenswrapper[4904]: I1121 13:56:26.201870 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerStarted","Data":"818c0a48f7100755b2b82190c5c33e8b1db45ca9da0cc93c0a5aec923c9c120b"} Nov 21 13:56:27 crc kubenswrapper[4904]: I1121 13:56:27.216329 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerStarted","Data":"dc04f4aa0e299e3fed96b66d972c4d5f54ab3c2030a5d2cfa133e88f421edda6"} Nov 21 13:56:28 crc kubenswrapper[4904]: I1121 13:56:28.114806 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:56:28 crc kubenswrapper[4904]: I1121 13:56:28.115212 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:56:29 crc kubenswrapper[4904]: I1121 13:56:29.169030 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/keystone-b64bb974-stp6c" Nov 21 13:56:29 crc kubenswrapper[4904]: I1121 13:56:29.239456 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerStarted","Data":"46c261e86190cdd0a7f75cd07e30b75514747abc4bae205edf34da610c4a017a"} Nov 21 13:56:29 crc kubenswrapper[4904]: I1121 13:56:29.239811 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:56:31 crc kubenswrapper[4904]: I1121 13:56:31.528370 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.956096215 podStartE2EDuration="9.528343281s" podCreationTimestamp="2025-11-21 13:56:22 +0000 UTC" firstStartedPulling="2025-11-21 13:56:23.275912119 +0000 UTC m=+1457.397444671" lastFinishedPulling="2025-11-21 13:56:28.848159185 +0000 UTC m=+1462.969691737" observedRunningTime="2025-11-21 13:56:29.281701378 +0000 UTC m=+1463.403233930" watchObservedRunningTime="2025-11-21 13:56:31.528343281 +0000 UTC m=+1465.649875833" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.565080 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.567580 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.570203 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.570591 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.571029 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5j2w5" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.584508 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.698490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config-secret\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.698548 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.698618 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvrhv\" (UniqueName: \"kubernetes.io/projected/531ba4ef-f7e2-4398-b42c-76246ef4910d-kube-api-access-mvrhv\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.698737 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.800962 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvrhv\" (UniqueName: \"kubernetes.io/projected/531ba4ef-f7e2-4398-b42c-76246ef4910d-kube-api-access-mvrhv\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.801160 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.801238 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config-secret\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.801277 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.802425 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.811163 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config-secret\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.816796 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.820121 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvrhv\" (UniqueName: \"kubernetes.io/projected/531ba4ef-f7e2-4398-b42c-76246ef4910d-kube-api-access-mvrhv\") pod \"openstackclient\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.889858 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 13:56:32 crc kubenswrapper[4904]: I1121 13:56:32.996126 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.020356 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.044734 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.046811 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.055379 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.106981 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99k9x\" (UniqueName: \"kubernetes.io/projected/db36aaca-d216-45b3-b8f1-f7a94bae89e6-kube-api-access-99k9x\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.107102 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/db36aaca-d216-45b3-b8f1-f7a94bae89e6-openstack-config\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.107355 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db36aaca-d216-45b3-b8f1-f7a94bae89e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.107478 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db36aaca-d216-45b3-b8f1-f7a94bae89e6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.210091 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db36aaca-d216-45b3-b8f1-f7a94bae89e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.210188 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db36aaca-d216-45b3-b8f1-f7a94bae89e6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.210259 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99k9x\" (UniqueName: \"kubernetes.io/projected/db36aaca-d216-45b3-b8f1-f7a94bae89e6-kube-api-access-99k9x\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc 
kubenswrapper[4904]: I1121 13:56:33.210356 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/db36aaca-d216-45b3-b8f1-f7a94bae89e6-openstack-config\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.211626 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/db36aaca-d216-45b3-b8f1-f7a94bae89e6-openstack-config\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.216252 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db36aaca-d216-45b3-b8f1-f7a94bae89e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.219777 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db36aaca-d216-45b3-b8f1-f7a94bae89e6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.227548 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99k9x\" (UniqueName: \"kubernetes.io/projected/db36aaca-d216-45b3-b8f1-f7a94bae89e6-kube-api-access-99k9x\") pod \"openstackclient\" (UID: \"db36aaca-d216-45b3-b8f1-f7a94bae89e6\") " pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.294945 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lj7p5" event={"ID":"6201ad46-9eaf-4b17-b40f-e31756dea737","Type":"ContainerStarted","Data":"c42ed548f2712bf7945ce966c0e0510f943821ae46678a15b6272ce972ee904b"} Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.318502 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lj7p5" podStartSLOduration=3.226519789 podStartE2EDuration="1m17.318451873s" podCreationTimestamp="2025-11-21 13:55:16 +0000 UTC" firstStartedPulling="2025-11-21 13:55:18.140120552 +0000 UTC m=+1392.261653104" lastFinishedPulling="2025-11-21 13:56:32.232052636 +0000 UTC m=+1466.353585188" observedRunningTime="2025-11-21 13:56:33.317970742 +0000 UTC m=+1467.439503294" watchObservedRunningTime="2025-11-21 13:56:33.318451873 +0000 UTC m=+1467.439984425" Nov 21 13:56:33 crc kubenswrapper[4904]: I1121 13:56:33.366455 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 13:56:33 crc kubenswrapper[4904]: E1121 13:56:33.542754 4904 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 21 13:56:33 crc kubenswrapper[4904]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_531ba4ef-f7e2-4398-b42c-76246ef4910d_0(2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69" Netns:"/var/run/netns/74de6fe7-51f4-4496-aa9e-99d26fff4e30" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69;K8S_POD_UID=531ba4ef-f7e2-4398-b42c-76246ef4910d" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/531ba4ef-f7e2-4398-b42c-76246ef4910d:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient 2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69 network default NAD default] [openstack/openstackclient 2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69 network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:ba [10.217.0.186/23] Nov 21 13:56:33 crc kubenswrapper[4904]: ' Nov 21 13:56:33 crc kubenswrapper[4904]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 21 13:56:33 crc kubenswrapper[4904]: > Nov 21 13:56:33 crc kubenswrapper[4904]: E1121 13:56:33.542882 4904 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 21 13:56:33 crc kubenswrapper[4904]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_531ba4ef-f7e2-4398-b42c-76246ef4910d_0(2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69" Netns:"/var/run/netns/74de6fe7-51f4-4496-aa9e-99d26fff4e30" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69;K8S_POD_UID=531ba4ef-f7e2-4398-b42c-76246ef4910d" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/531ba4ef-f7e2-4398-b42c-76246ef4910d:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient 2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69 network default NAD default] [openstack/openstackclient 2d9e274baaaebc371f8a8ff90601461f1e44b353bdc0471cfcfea251a03e9e69 
network default NAD default] failed to configure pod interface: canceled old pod sandbox waiting for OVS port binding for 0a:58:0a:d9:00:ba [10.217.0.186/23] Nov 21 13:56:33 crc kubenswrapper[4904]: ' Nov 21 13:56:33 crc kubenswrapper[4904]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 21 13:56:33 crc kubenswrapper[4904]: > pod="openstack/openstackclient" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.013025 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.307548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"db36aaca-d216-45b3-b8f1-f7a94bae89e6","Type":"ContainerStarted","Data":"3bd6289c24bc7761da3aaf3353b0ad37d191a8b707506096f40176a4b241fba1"} Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.307603 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.316354 4904 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="531ba4ef-f7e2-4398-b42c-76246ef4910d" podUID="db36aaca-d216-45b3-b8f1-f7a94bae89e6" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.325134 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.442177 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config-secret\") pod \"531ba4ef-f7e2-4398-b42c-76246ef4910d\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.442253 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-combined-ca-bundle\") pod \"531ba4ef-f7e2-4398-b42c-76246ef4910d\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.442302 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config\") pod \"531ba4ef-f7e2-4398-b42c-76246ef4910d\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.442498 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvrhv\" (UniqueName: \"kubernetes.io/projected/531ba4ef-f7e2-4398-b42c-76246ef4910d-kube-api-access-mvrhv\") pod \"531ba4ef-f7e2-4398-b42c-76246ef4910d\" (UID: \"531ba4ef-f7e2-4398-b42c-76246ef4910d\") " Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.443252 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "531ba4ef-f7e2-4398-b42c-76246ef4910d" (UID: 
"531ba4ef-f7e2-4398-b42c-76246ef4910d"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.449633 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "531ba4ef-f7e2-4398-b42c-76246ef4910d" (UID: "531ba4ef-f7e2-4398-b42c-76246ef4910d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.450378 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531ba4ef-f7e2-4398-b42c-76246ef4910d-kube-api-access-mvrhv" (OuterVolumeSpecName: "kube-api-access-mvrhv") pod "531ba4ef-f7e2-4398-b42c-76246ef4910d" (UID: "531ba4ef-f7e2-4398-b42c-76246ef4910d"). InnerVolumeSpecName "kube-api-access-mvrhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.450591 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "531ba4ef-f7e2-4398-b42c-76246ef4910d" (UID: "531ba4ef-f7e2-4398-b42c-76246ef4910d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.526623 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531ba4ef-f7e2-4398-b42c-76246ef4910d" path="/var/lib/kubelet/pods/531ba4ef-f7e2-4398-b42c-76246ef4910d/volumes" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.545603 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvrhv\" (UniqueName: \"kubernetes.io/projected/531ba4ef-f7e2-4398-b42c-76246ef4910d-kube-api-access-mvrhv\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.545902 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.545981 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531ba4ef-f7e2-4398-b42c-76246ef4910d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:34 crc kubenswrapper[4904]: I1121 13:56:34.546046 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/531ba4ef-f7e2-4398-b42c-76246ef4910d-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:35 crc kubenswrapper[4904]: I1121 13:56:35.316895 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 21 13:56:35 crc kubenswrapper[4904]: I1121 13:56:35.395804 4904 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="531ba4ef-f7e2-4398-b42c-76246ef4910d" podUID="db36aaca-d216-45b3-b8f1-f7a94bae89e6" Nov 21 13:56:35 crc kubenswrapper[4904]: I1121 13:56:35.719009 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dn4kl" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="registry-server" probeResult="failure" output=< Nov 21 13:56:35 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 13:56:35 crc kubenswrapper[4904]: > Nov 21 13:56:36 crc kubenswrapper[4904]: I1121 13:56:36.334193 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qg6bd" event={"ID":"46d63715-407d-4908-a38f-5f6fd76729db","Type":"ContainerStarted","Data":"674310f15dc50a6947a9806d3777baa8f9988f25218411836f49bf8fb95c31a7"} Nov 21 13:56:36 crc kubenswrapper[4904]: I1121 13:56:36.361194 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qg6bd" podStartSLOduration=3.260976079 podStartE2EDuration="1m20.361172991s" podCreationTimestamp="2025-11-21 13:55:16 +0000 UTC" firstStartedPulling="2025-11-21 13:55:18.091497388 +0000 UTC m=+1392.213029940" lastFinishedPulling="2025-11-21 13:56:35.1916943 +0000 UTC m=+1469.313226852" observedRunningTime="2025-11-21 13:56:36.354916829 +0000 UTC m=+1470.476449381" watchObservedRunningTime="2025-11-21 13:56:36.361172991 +0000 UTC m=+1470.482705543" Nov 21 13:56:36 crc kubenswrapper[4904]: I1121 13:56:36.925779 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6f94fcfbbf-pg26j"] Nov 21 13:56:36 crc kubenswrapper[4904]: I1121 13:56:36.928283 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:36 crc kubenswrapper[4904]: I1121 13:56:36.937034 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 21 13:56:36 crc kubenswrapper[4904]: I1121 13:56:36.937874 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f94fcfbbf-pg26j"] Nov 21 13:56:36 crc kubenswrapper[4904]: I1121 13:56:36.938455 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 21 13:56:36 crc kubenswrapper[4904]: I1121 13:56:36.938596 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.025214 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-config-data\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.025860 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-internal-tls-certs\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.025929 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-public-tls-certs\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.026073 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-combined-ca-bundle\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.026124 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9f2\" (UniqueName: \"kubernetes.io/projected/e819c802-1c71-4668-bc99-5b41cc11c656-kube-api-access-4d9f2\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.026150 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e819c802-1c71-4668-bc99-5b41cc11c656-log-httpd\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.026244 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e819c802-1c71-4668-bc99-5b41cc11c656-run-httpd\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " 
pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.026278 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e819c802-1c71-4668-bc99-5b41cc11c656-etc-swift\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.127976 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e819c802-1c71-4668-bc99-5b41cc11c656-run-httpd\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.128040 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e819c802-1c71-4668-bc99-5b41cc11c656-etc-swift\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.128073 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-config-data\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.128098 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-internal-tls-certs\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.128140 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-public-tls-certs\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.128216 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-combined-ca-bundle\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.128241 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9f2\" (UniqueName: \"kubernetes.io/projected/e819c802-1c71-4668-bc99-5b41cc11c656-kube-api-access-4d9f2\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.128261 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e819c802-1c71-4668-bc99-5b41cc11c656-log-httpd\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 
13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.129305 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e819c802-1c71-4668-bc99-5b41cc11c656-log-httpd\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.129624 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e819c802-1c71-4668-bc99-5b41cc11c656-run-httpd\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.141619 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-public-tls-certs\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.141933 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-internal-tls-certs\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.142800 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e819c802-1c71-4668-bc99-5b41cc11c656-etc-swift\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.146860 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-config-data\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.168527 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e819c802-1c71-4668-bc99-5b41cc11c656-combined-ca-bundle\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.199724 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9f2\" (UniqueName: \"kubernetes.io/projected/e819c802-1c71-4668-bc99-5b41cc11c656-kube-api-access-4d9f2\") pod \"swift-proxy-6f94fcfbbf-pg26j\" (UID: \"e819c802-1c71-4668-bc99-5b41cc11c656\") " pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.253188 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.359795 4904 generic.go:334] "Generic (PLEG): container finished" podID="6201ad46-9eaf-4b17-b40f-e31756dea737" containerID="c42ed548f2712bf7945ce966c0e0510f943821ae46678a15b6272ce972ee904b" exitCode=0 Nov 21 13:56:37 crc kubenswrapper[4904]: I1121 13:56:37.360023 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lj7p5" event={"ID":"6201ad46-9eaf-4b17-b40f-e31756dea737","Type":"ContainerDied","Data":"c42ed548f2712bf7945ce966c0e0510f943821ae46678a15b6272ce972ee904b"} Nov 21 13:56:38 crc kubenswrapper[4904]: I1121 13:56:38.029881 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f94fcfbbf-pg26j"] Nov 21 13:56:38 crc kubenswrapper[4904]: I1121 13:56:38.375570 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" event={"ID":"e819c802-1c71-4668-bc99-5b41cc11c656","Type":"ContainerStarted","Data":"dfb2526f0f35de2b7f3cf89a50eeeb8f42bb7519ee36f9f4b6a971523cfca880"} Nov 21 13:56:38 crc kubenswrapper[4904]: I1121 13:56:38.381255 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bgqc" event={"ID":"eb8abf79-d309-4440-9815-0ebcb93b9312","Type":"ContainerStarted","Data":"55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8"} Nov 21 13:56:38 crc kubenswrapper[4904]: I1121 13:56:38.421438 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4bgqc" podStartSLOduration=17.209064203 podStartE2EDuration="47.421416525s" podCreationTimestamp="2025-11-21 13:55:51 +0000 UTC" firstStartedPulling="2025-11-21 13:56:06.769190403 +0000 UTC m=+1440.890722955" lastFinishedPulling="2025-11-21 13:56:36.981542725 +0000 UTC m=+1471.103075277" observedRunningTime="2025-11-21 13:56:38.411300358 +0000 UTC m=+1472.532832910" watchObservedRunningTime="2025-11-21 13:56:38.421416525 +0000 UTC m=+1472.542949077" Nov 21 13:56:38 crc kubenswrapper[4904]: I1121 13:56:38.878504 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.000495 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scdpq\" (UniqueName: \"kubernetes.io/projected/6201ad46-9eaf-4b17-b40f-e31756dea737-kube-api-access-scdpq\") pod \"6201ad46-9eaf-4b17-b40f-e31756dea737\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.000749 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-combined-ca-bundle\") pod \"6201ad46-9eaf-4b17-b40f-e31756dea737\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.000956 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-db-sync-config-data\") pod \"6201ad46-9eaf-4b17-b40f-e31756dea737\" (UID: \"6201ad46-9eaf-4b17-b40f-e31756dea737\") " Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.007400 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6201ad46-9eaf-4b17-b40f-e31756dea737-kube-api-access-scdpq" (OuterVolumeSpecName: "kube-api-access-scdpq") pod "6201ad46-9eaf-4b17-b40f-e31756dea737" (UID: "6201ad46-9eaf-4b17-b40f-e31756dea737"). InnerVolumeSpecName "kube-api-access-scdpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.010825 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6201ad46-9eaf-4b17-b40f-e31756dea737" (UID: "6201ad46-9eaf-4b17-b40f-e31756dea737"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.038232 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6201ad46-9eaf-4b17-b40f-e31756dea737" (UID: "6201ad46-9eaf-4b17-b40f-e31756dea737"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.104604 4904 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.104677 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scdpq\" (UniqueName: \"kubernetes.io/projected/6201ad46-9eaf-4b17-b40f-e31756dea737-kube-api-access-scdpq\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.104702 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6201ad46-9eaf-4b17-b40f-e31756dea737-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.394058 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" event={"ID":"e819c802-1c71-4668-bc99-5b41cc11c656","Type":"ContainerStarted","Data":"32e0824da726a582d2270c60b7bf6138279b2469f0e3c05bea043be782420ecf"} Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.394630 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" event={"ID":"e819c802-1c71-4668-bc99-5b41cc11c656","Type":"ContainerStarted","Data":"74bae429accb785002e359c3750ab547f93f19162956d09e2b40150939de7d35"} Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.394742 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.394838 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.397629 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lj7p5" event={"ID":"6201ad46-9eaf-4b17-b40f-e31756dea737","Type":"ContainerDied","Data":"65b03ff57782290ab2ff1d22ba1c468839595f066d82a5f38afadba27f1b9043"} Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.397680 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b03ff57782290ab2ff1d22ba1c468839595f066d82a5f38afadba27f1b9043" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.397744 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lj7p5" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.424488 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" podStartSLOduration=3.424452841 podStartE2EDuration="3.424452841s" podCreationTimestamp="2025-11-21 13:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:39.418165567 +0000 UTC m=+1473.539698119" watchObservedRunningTime="2025-11-21 13:56:39.424452841 +0000 UTC m=+1473.545985403" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.819053 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-75d77468c8-8lrt8"] Nov 21 13:56:39 crc kubenswrapper[4904]: E1121 13:56:39.819539 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6201ad46-9eaf-4b17-b40f-e31756dea737" containerName="barbican-db-sync" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.819554 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6201ad46-9eaf-4b17-b40f-e31756dea737" containerName="barbican-db-sync" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.819844 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6201ad46-9eaf-4b17-b40f-e31756dea737" containerName="barbican-db-sync" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.821179 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.825569 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mmtwc" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.826089 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.837939 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.846603 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7f6485bcff-l7ccc"] Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.854717 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.859502 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.859554 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75d77468c8-8lrt8"] Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.909967 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f6485bcff-l7ccc"] Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.929372 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-config-data\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.929588 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6718b1-07c1-4270-a030-cc36bef71bbc-logs\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.929635 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-config-data-custom\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.929705 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b360c72-5d34-4e63-b653-3f3e80539384-logs\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.929733 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q55w\" (UniqueName: \"kubernetes.io/projected/9a6718b1-07c1-4270-a030-cc36bef71bbc-kube-api-access-4q55w\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.929769 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-combined-ca-bundle\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.929958 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-combined-ca-bundle\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:39 crc 
kubenswrapper[4904]: I1121 13:56:39.929999 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-config-data\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.930036 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-config-data-custom\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.930169 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5tsx\" (UniqueName: \"kubernetes.io/projected/8b360c72-5d34-4e63-b653-3f3e80539384-kube-api-access-b5tsx\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.988827 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-798cw"] Nov 21 13:56:39 crc kubenswrapper[4904]: I1121 13:56:39.991247 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.021725 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-798cw"] Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.032551 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6718b1-07c1-4270-a030-cc36bef71bbc-logs\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.032923 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-config-data-custom\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.033025 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.033160 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b360c72-5d34-4e63-b653-3f3e80539384-logs\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.033295 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4q55w\" (UniqueName: \"kubernetes.io/projected/9a6718b1-07c1-4270-a030-cc36bef71bbc-kube-api-access-4q55w\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.033385 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-combined-ca-bundle\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.033471 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.033589 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs5qr\" (UniqueName: \"kubernetes.io/projected/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-kube-api-access-hs5qr\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.033719 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-combined-ca-bundle\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.034256 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-config-data\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.034364 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-config\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.034618 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-svc\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.034744 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-config-data-custom\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 
13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.034881 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5tsx\" (UniqueName: \"kubernetes.io/projected/8b360c72-5d34-4e63-b653-3f3e80539384-kube-api-access-b5tsx\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.034992 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-config-data\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.035115 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.041085 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b360c72-5d34-4e63-b653-3f3e80539384-logs\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.041119 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6718b1-07c1-4270-a030-cc36bef71bbc-logs\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.050782 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-config-data-custom\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.051362 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-config-data\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.052000 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-combined-ca-bundle\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.053407 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-config-data\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " 
pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.058616 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a6718b1-07c1-4270-a030-cc36bef71bbc-config-data-custom\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.062760 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69db589dbd-54tkv"] Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.064988 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.069026 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b360c72-5d34-4e63-b653-3f3e80539384-combined-ca-bundle\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.071850 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.074123 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q55w\" (UniqueName: \"kubernetes.io/projected/9a6718b1-07c1-4270-a030-cc36bef71bbc-kube-api-access-4q55w\") pod \"barbican-worker-7f6485bcff-l7ccc\" (UID: \"9a6718b1-07c1-4270-a030-cc36bef71bbc\") " pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.095710 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69db589dbd-54tkv"] Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.105245 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5tsx\" (UniqueName: \"kubernetes.io/projected/8b360c72-5d34-4e63-b653-3f3e80539384-kube-api-access-b5tsx\") pod \"barbican-keystone-listener-75d77468c8-8lrt8\" (UID: \"8b360c72-5d34-4e63-b653-3f3e80539384\") " pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138284 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138360 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f7518f-8e62-4d48-990c-1955072e5e98-logs\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138407 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2j6k\" (UniqueName: \"kubernetes.io/projected/64f7518f-8e62-4d48-990c-1955072e5e98-kube-api-access-k2j6k\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " 
pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138443 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138517 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138565 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs5qr\" (UniqueName: \"kubernetes.io/projected/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-kube-api-access-hs5qr\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138584 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138607 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-combined-ca-bundle\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.138643 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data-custom\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.140270 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.140651 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-config\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.140701 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-svc\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " 
pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.141427 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.141972 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-svc\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.142212 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.142621 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-config\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.162743 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs5qr\" (UniqueName: \"kubernetes.io/projected/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-kube-api-access-hs5qr\") pod \"dnsmasq-dns-85ff748b95-798cw\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.192774 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.220420 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7f6485bcff-l7ccc" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.243581 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.243630 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-combined-ca-bundle\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.243675 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data-custom\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.243753 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f7518f-8e62-4d48-990c-1955072e5e98-logs\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.243797 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2j6k\" (UniqueName: \"kubernetes.io/projected/64f7518f-8e62-4d48-990c-1955072e5e98-kube-api-access-k2j6k\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.245607 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f7518f-8e62-4d48-990c-1955072e5e98-logs\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.248597 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-combined-ca-bundle\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.252875 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data-custom\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.263882 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 
crc kubenswrapper[4904]: I1121 13:56:40.270215 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2j6k\" (UniqueName: \"kubernetes.io/projected/64f7518f-8e62-4d48-990c-1955072e5e98-kube-api-access-k2j6k\") pod \"barbican-api-69db589dbd-54tkv\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.322942 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:40 crc kubenswrapper[4904]: I1121 13:56:40.530270 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:42 crc kubenswrapper[4904]: I1121 13:56:42.083250 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:56:42 crc kubenswrapper[4904]: I1121 13:56:42.084129 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.161332 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4bgqc" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="registry-server" probeResult="failure" output=< Nov 21 13:56:43 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 13:56:43 crc kubenswrapper[4904]: > Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.220313 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f4785b448-tkwjh"] Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.223795 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.227478 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.228103 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.263894 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f4785b448-tkwjh"] Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.330570 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w2cl\" (UniqueName: \"kubernetes.io/projected/5533e799-de81-4937-a2ca-8876e2bf3c22-kube-api-access-4w2cl\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.330935 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-internal-tls-certs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.331034 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-config-data\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.331522 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-public-tls-certs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.331743 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5533e799-de81-4937-a2ca-8876e2bf3c22-logs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.331865 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-config-data-custom\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.332088 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-combined-ca-bundle\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.434317 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-combined-ca-bundle\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.434409 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w2cl\" (UniqueName: \"kubernetes.io/projected/5533e799-de81-4937-a2ca-8876e2bf3c22-kube-api-access-4w2cl\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.434501 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-internal-tls-certs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.434524 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-config-data\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.434547 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-public-tls-certs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.434615 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5533e799-de81-4937-a2ca-8876e2bf3c22-logs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.434632 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-config-data-custom\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.435529 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5533e799-de81-4937-a2ca-8876e2bf3c22-logs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.443848 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-internal-tls-certs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.445787 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-config-data-custom\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.446974 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-config-data\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.447130 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-public-tls-certs\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.463623 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5533e799-de81-4937-a2ca-8876e2bf3c22-combined-ca-bundle\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.472634 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w2cl\" (UniqueName: \"kubernetes.io/projected/5533e799-de81-4937-a2ca-8876e2bf3c22-kube-api-access-4w2cl\") pod \"barbican-api-5f4785b448-tkwjh\" (UID: \"5533e799-de81-4937-a2ca-8876e2bf3c22\") " pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:43 crc kubenswrapper[4904]: I1121 13:56:43.552569 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:44 crc kubenswrapper[4904]: I1121 13:56:44.291529 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:44 crc kubenswrapper[4904]: I1121 13:56:44.292950 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="ceilometer-central-agent" containerID="cri-o://78f3ddcfc6799851f8926a2b46d04b80d24bd2d70f34004c4147f8ebbb4ef841" gracePeriod=30 Nov 21 13:56:44 crc kubenswrapper[4904]: I1121 13:56:44.293085 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="ceilometer-notification-agent" containerID="cri-o://818c0a48f7100755b2b82190c5c33e8b1db45ca9da0cc93c0a5aec923c9c120b" gracePeriod=30 Nov 21 13:56:44 crc kubenswrapper[4904]: I1121 13:56:44.293086 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="sg-core" containerID="cri-o://dc04f4aa0e299e3fed96b66d972c4d5f54ab3c2030a5d2cfa133e88f421edda6" gracePeriod=30 Nov 21 13:56:44 crc kubenswrapper[4904]: I1121 13:56:44.293307 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="proxy-httpd" containerID="cri-o://46c261e86190cdd0a7f75cd07e30b75514747abc4bae205edf34da610c4a017a" gracePeriod=30 Nov 21 13:56:44 crc kubenswrapper[4904]: I1121 13:56:44.310683 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.185:3000/\": EOF" Nov 21 13:56:44 crc kubenswrapper[4904]: I1121 13:56:44.476254 4904 generic.go:334] "Generic (PLEG): container finished" podID="b8aca765-8842-4314-a89a-403377c3d5be" containerID="dc04f4aa0e299e3fed96b66d972c4d5f54ab3c2030a5d2cfa133e88f421edda6" exitCode=2 Nov 21 13:56:44 crc kubenswrapper[4904]: I1121 13:56:44.476314 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerDied","Data":"dc04f4aa0e299e3fed96b66d972c4d5f54ab3c2030a5d2cfa133e88f421edda6"} Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.068035 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-65d45d497f-rtxs5"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.070111 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.077934 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.078234 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-bnkwf" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.078238 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.120748 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-65d45d497f-rtxs5"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.187331 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-combined-ca-bundle\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.187799 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data-custom\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.187849 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.187899 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvzr6\" (UniqueName: \"kubernetes.io/projected/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-kube-api-access-kvzr6\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.287352 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-798cw"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.299235 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data-custom\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.299307 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.299333 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvzr6\" (UniqueName: 
\"kubernetes.io/projected/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-kube-api-access-kvzr6\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.299355 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-combined-ca-bundle\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.305310 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-4bdw6"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.309610 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-combined-ca-bundle\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.311543 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.318329 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.342379 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvzr6\" (UniqueName: \"kubernetes.io/projected/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-kube-api-access-kvzr6\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.351217 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data-custom\") pod \"heat-engine-65d45d497f-rtxs5\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.363824 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-4bdw6"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.379623 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-796ccc96c5-hwn84"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.381329 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.388462 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.393324 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.403122 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-svc\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.403251 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scb8s\" (UniqueName: \"kubernetes.io/projected/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-kube-api-access-scb8s\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.403319 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-config\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.404256 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-swift-storage-0\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.404347 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.404388 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.404610 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-868c98d758-4t8gx"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.411549 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.422096 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.469415 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-796ccc96c5-hwn84"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.498052 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-868c98d758-4t8gx"] Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.513924 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-combined-ca-bundle\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.513997 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9jl\" (UniqueName: \"kubernetes.io/projected/569e06a9-99e0-475e-ac01-d44af882269f-kube-api-access-4z9jl\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514183 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-svc\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514276 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514398 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data-custom\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514493 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scb8s\" (UniqueName: \"kubernetes.io/projected/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-kube-api-access-scb8s\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514537 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data-custom\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514590 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-config\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514677 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514721 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-swift-storage-0\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514789 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514865 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-combined-ca-bundle\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514902 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.514930 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nvdv\" (UniqueName: \"kubernetes.io/projected/51592b04-b9b0-4744-90c9-81b2f01af3ae-kube-api-access-6nvdv\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.516447 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-config\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.516586 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-swift-storage-0\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.516855 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-svc\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.517317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-sb\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.517530 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-nb\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.530166 4904 generic.go:334] "Generic (PLEG): container finished" podID="b8aca765-8842-4314-a89a-403377c3d5be" containerID="46c261e86190cdd0a7f75cd07e30b75514747abc4bae205edf34da610c4a017a" exitCode=0 Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.530208 4904 generic.go:334] "Generic (PLEG): container finished" podID="b8aca765-8842-4314-a89a-403377c3d5be" containerID="78f3ddcfc6799851f8926a2b46d04b80d24bd2d70f34004c4147f8ebbb4ef841" exitCode=0 Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.530234 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerDied","Data":"46c261e86190cdd0a7f75cd07e30b75514747abc4bae205edf34da610c4a017a"} Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.530268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerDied","Data":"78f3ddcfc6799851f8926a2b46d04b80d24bd2d70f34004c4147f8ebbb4ef841"} Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.540129 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scb8s\" (UniqueName: \"kubernetes.io/projected/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-kube-api-access-scb8s\") pod \"dnsmasq-dns-76ff85fb9f-4bdw6\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.617769 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data-custom\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.617871 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data-custom\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.617970 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: 
\"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.618080 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-combined-ca-bundle\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.618125 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nvdv\" (UniqueName: \"kubernetes.io/projected/51592b04-b9b0-4744-90c9-81b2f01af3ae-kube-api-access-6nvdv\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.618224 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-combined-ca-bundle\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.618254 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9jl\" (UniqueName: \"kubernetes.io/projected/569e06a9-99e0-475e-ac01-d44af882269f-kube-api-access-4z9jl\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.618403 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.632866 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-combined-ca-bundle\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.641623 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-combined-ca-bundle\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.642642 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.644093 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 
21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.647465 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nvdv\" (UniqueName: \"kubernetes.io/projected/51592b04-b9b0-4744-90c9-81b2f01af3ae-kube-api-access-6nvdv\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.650512 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data-custom\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.654797 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data-custom\") pod \"heat-cfnapi-796ccc96c5-hwn84\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.659231 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9jl\" (UniqueName: \"kubernetes.io/projected/569e06a9-99e0-475e-ac01-d44af882269f-kube-api-access-4z9jl\") pod \"heat-api-868c98d758-4t8gx\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.725052 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dn4kl" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="registry-server" probeResult="failure" output=< Nov 21 13:56:45 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 13:56:45 crc kubenswrapper[4904]: > Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.775959 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.793180 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:56:45 crc kubenswrapper[4904]: I1121 13:56:45.813876 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:56:47 crc kubenswrapper[4904]: I1121 13:56:47.267373 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:47 crc kubenswrapper[4904]: I1121 13:56:47.276028 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" Nov 21 13:56:47 crc kubenswrapper[4904]: I1121 13:56:47.566310 4904 generic.go:334] "Generic (PLEG): container finished" podID="46d63715-407d-4908-a38f-5f6fd76729db" containerID="674310f15dc50a6947a9806d3777baa8f9988f25218411836f49bf8fb95c31a7" exitCode=0 Nov 21 13:56:47 crc kubenswrapper[4904]: I1121 13:56:47.567494 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qg6bd" event={"ID":"46d63715-407d-4908-a38f-5f6fd76729db","Type":"ContainerDied","Data":"674310f15dc50a6947a9806d3777baa8f9988f25218411836f49bf8fb95c31a7"} Nov 21 13:56:48 crc kubenswrapper[4904]: I1121 13:56:48.613898 4904 generic.go:334] "Generic (PLEG): container finished" podID="b8aca765-8842-4314-a89a-403377c3d5be" containerID="818c0a48f7100755b2b82190c5c33e8b1db45ca9da0cc93c0a5aec923c9c120b" exitCode=0 Nov 21 13:56:48 crc kubenswrapper[4904]: I1121 13:56:48.614159 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerDied","Data":"818c0a48f7100755b2b82190c5c33e8b1db45ca9da0cc93c0a5aec923c9c120b"} Nov 21 13:56:49 crc kubenswrapper[4904]: I1121 13:56:49.105457 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:49 crc kubenswrapper[4904]: I1121 13:56:49.139622 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f758bd6b6-r5mk2" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.485467 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.531786 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-dc7665556-sgmcf" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.180:9696/\": dial tcp 10.217.0.180:9696: connect: connection refused" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.568278 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-config-data\") pod \"46d63715-407d-4908-a38f-5f6fd76729db\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.568369 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-combined-ca-bundle\") pod \"46d63715-407d-4908-a38f-5f6fd76729db\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.568437 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-db-sync-config-data\") pod \"46d63715-407d-4908-a38f-5f6fd76729db\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.568605 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46d63715-407d-4908-a38f-5f6fd76729db-etc-machine-id\") pod \"46d63715-407d-4908-a38f-5f6fd76729db\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.568755 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-scripts\") pod \"46d63715-407d-4908-a38f-5f6fd76729db\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.568847 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-874vl\" (UniqueName: \"kubernetes.io/projected/46d63715-407d-4908-a38f-5f6fd76729db-kube-api-access-874vl\") pod \"46d63715-407d-4908-a38f-5f6fd76729db\" (UID: \"46d63715-407d-4908-a38f-5f6fd76729db\") " Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.568835 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46d63715-407d-4908-a38f-5f6fd76729db-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "46d63715-407d-4908-a38f-5f6fd76729db" (UID: "46d63715-407d-4908-a38f-5f6fd76729db"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.569710 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/46d63715-407d-4908-a38f-5f6fd76729db-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.596587 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46d63715-407d-4908-a38f-5f6fd76729db-kube-api-access-874vl" (OuterVolumeSpecName: "kube-api-access-874vl") pod "46d63715-407d-4908-a38f-5f6fd76729db" (UID: "46d63715-407d-4908-a38f-5f6fd76729db"). InnerVolumeSpecName "kube-api-access-874vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.605984 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-scripts" (OuterVolumeSpecName: "scripts") pod "46d63715-407d-4908-a38f-5f6fd76729db" (UID: "46d63715-407d-4908-a38f-5f6fd76729db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.606635 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "46d63715-407d-4908-a38f-5f6fd76729db" (UID: "46d63715-407d-4908-a38f-5f6fd76729db"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.666110 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46d63715-407d-4908-a38f-5f6fd76729db" (UID: "46d63715-407d-4908-a38f-5f6fd76729db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.672965 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-874vl\" (UniqueName: \"kubernetes.io/projected/46d63715-407d-4908-a38f-5f6fd76729db-kube-api-access-874vl\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.673357 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.673452 4904 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.673517 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.686901 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qg6bd" event={"ID":"46d63715-407d-4908-a38f-5f6fd76729db","Type":"ContainerDied","Data":"1b43428dfefc73094af84fa81b30675a017729d68477191dec66c71f491aed6c"} Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.687149 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b43428dfefc73094af84fa81b30675a017729d68477191dec66c71f491aed6c" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.687003 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qg6bd" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.730244 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-config-data" (OuterVolumeSpecName: "config-data") pod "46d63715-407d-4908-a38f-5f6fd76729db" (UID: "46d63715-407d-4908-a38f-5f6fd76729db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:50 crc kubenswrapper[4904]: E1121 13:56:50.771517 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Nov 21 13:56:50 crc kubenswrapper[4904]: E1121 13:56:50.772087 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64hc6h59hb8h66ch5cch67dh54dh648h59hb6h5fh5c6h646h9ch67fh5fbh694h5cbh56bh586h7ch54chb8h588h56dh654h57dhcbh54bh645h5dq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99k9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(db36aaca-d216-45b3-b8f1-f7a94bae89e6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 13:56:50 crc kubenswrapper[4904]: E1121 13:56:50.773472 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="db36aaca-d216-45b3-b8f1-f7a94bae89e6" Nov 21 13:56:50 crc kubenswrapper[4904]: I1121 13:56:50.780189 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46d63715-407d-4908-a38f-5f6fd76729db-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.355363 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.502754 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-log-httpd\") pod \"b8aca765-8842-4314-a89a-403377c3d5be\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.502938 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-sg-core-conf-yaml\") pod \"b8aca765-8842-4314-a89a-403377c3d5be\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.502976 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-config-data\") pod \"b8aca765-8842-4314-a89a-403377c3d5be\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.503069 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl75z\" (UniqueName: \"kubernetes.io/projected/b8aca765-8842-4314-a89a-403377c3d5be-kube-api-access-dl75z\") pod \"b8aca765-8842-4314-a89a-403377c3d5be\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.503141 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-run-httpd\") pod \"b8aca765-8842-4314-a89a-403377c3d5be\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.503281 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-scripts\") pod \"b8aca765-8842-4314-a89a-403377c3d5be\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.503329 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-combined-ca-bundle\") pod \"b8aca765-8842-4314-a89a-403377c3d5be\" (UID: \"b8aca765-8842-4314-a89a-403377c3d5be\") " Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.511405 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b8aca765-8842-4314-a89a-403377c3d5be" (UID: "b8aca765-8842-4314-a89a-403377c3d5be"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.524151 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b8aca765-8842-4314-a89a-403377c3d5be" (UID: "b8aca765-8842-4314-a89a-403377c3d5be"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.544813 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8aca765-8842-4314-a89a-403377c3d5be-kube-api-access-dl75z" (OuterVolumeSpecName: "kube-api-access-dl75z") pod "b8aca765-8842-4314-a89a-403377c3d5be" (UID: "b8aca765-8842-4314-a89a-403377c3d5be"). InnerVolumeSpecName "kube-api-access-dl75z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.550032 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-scripts" (OuterVolumeSpecName: "scripts") pod "b8aca765-8842-4314-a89a-403377c3d5be" (UID: "b8aca765-8842-4314-a89a-403377c3d5be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.609711 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl75z\" (UniqueName: \"kubernetes.io/projected/b8aca765-8842-4314-a89a-403377c3d5be-kube-api-access-dl75z\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.610150 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.610160 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.610170 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8aca765-8842-4314-a89a-403377c3d5be-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.622325 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b8aca765-8842-4314-a89a-403377c3d5be" (UID: "b8aca765-8842-4314-a89a-403377c3d5be"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.649090 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-796ccc96c5-hwn84"] Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.720126 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.759510 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-798cw"] Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.770952 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-798cw" event={"ID":"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec","Type":"ContainerStarted","Data":"d69d8f60db7a66140a1ae07e6d9df424243e98eaf5003256ac9bcd1708741691"} Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.787947 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69db589dbd-54tkv" event={"ID":"64f7518f-8e62-4d48-990c-1955072e5e98","Type":"ContainerStarted","Data":"8e390e725041890a275775b17f9c04bfc6c2e8d710c59beeaba4ec822cb47b80"} Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.814196 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" event={"ID":"51592b04-b9b0-4744-90c9-81b2f01af3ae","Type":"ContainerStarted","Data":"ad325c93fc9ee7dd3eeff0e7d8174f157218c11027704dc7351bf2186e08c15c"} Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.845475 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69db589dbd-54tkv"] Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.902921 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:56:51 crc kubenswrapper[4904]: E1121 13:56:51.903676 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="ceilometer-notification-agent" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.903696 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="ceilometer-notification-agent" Nov 21 13:56:51 crc kubenswrapper[4904]: E1121 13:56:51.903723 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="sg-core" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.903731 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="sg-core" Nov 21 13:56:51 crc kubenswrapper[4904]: E1121 13:56:51.903751 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="proxy-httpd" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.903758 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="proxy-httpd" Nov 21 13:56:51 crc kubenswrapper[4904]: E1121 13:56:51.903770 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46d63715-407d-4908-a38f-5f6fd76729db" containerName="cinder-db-sync" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.903777 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="46d63715-407d-4908-a38f-5f6fd76729db" containerName="cinder-db-sync" Nov 21 13:56:51 crc kubenswrapper[4904]: E1121 
13:56:51.903803 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="ceilometer-central-agent" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.903811 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="ceilometer-central-agent" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.904086 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="sg-core" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.904100 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="ceilometer-notification-agent" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.904126 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="ceilometer-central-agent" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.904140 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="46d63715-407d-4908-a38f-5f6fd76729db" containerName="cinder-db-sync" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.904158 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aca765-8842-4314-a89a-403377c3d5be" containerName="proxy-httpd" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.905786 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.908181 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.908399 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8aca765-8842-4314-a89a-403377c3d5be","Type":"ContainerDied","Data":"c369637602ab74f9e7dfdc10fc01ac49680d551695f16e99dd4bba456d4d830e"} Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.908441 4904 scope.go:117] "RemoveContainer" containerID="46c261e86190cdd0a7f75cd07e30b75514747abc4bae205edf34da610c4a017a" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.915346 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.915556 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhklw" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.915696 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.915820 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.920395 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-config-data" (OuterVolumeSpecName: "config-data") pod "b8aca765-8842-4314-a89a-403377c3d5be" (UID: "b8aca765-8842-4314-a89a-403377c3d5be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:51 crc kubenswrapper[4904]: E1121 13:56:51.923798 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="db36aaca-d216-45b3-b8f1-f7a94bae89e6" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.925861 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.929859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8aca765-8842-4314-a89a-403377c3d5be" (UID: "b8aca765-8842-4314-a89a-403377c3d5be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:51 crc kubenswrapper[4904]: I1121 13:56:51.959973 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.009362 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-4bdw6"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.032793 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.032849 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.032904 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.032980 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bnqk\" (UniqueName: \"kubernetes.io/projected/a5788166-ebcd-434c-8684-3b2a5bbed6df-kube-api-access-2bnqk\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.038612 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5788166-ebcd-434c-8684-3b2a5bbed6df-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.038728 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.039723 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8aca765-8842-4314-a89a-403377c3d5be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.084741 4904 scope.go:117] "RemoveContainer" containerID="dc04f4aa0e299e3fed96b66d972c4d5f54ab3c2030a5d2cfa133e88f421edda6" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.143014 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.143096 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.143148 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.143425 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bnqk\" (UniqueName: \"kubernetes.io/projected/a5788166-ebcd-434c-8684-3b2a5bbed6df-kube-api-access-2bnqk\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.143790 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5788166-ebcd-434c-8684-3b2a5bbed6df-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.143859 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.153199 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5788166-ebcd-434c-8684-3b2a5bbed6df-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.158340 4904 scope.go:117] "RemoveContainer" containerID="818c0a48f7100755b2b82190c5c33e8b1db45ca9da0cc93c0a5aec923c9c120b" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.163234 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.173187 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.181711 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.183558 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-scripts\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.195243 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5982r"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.197772 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.217219 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bnqk\" (UniqueName: \"kubernetes.io/projected/a5788166-ebcd-434c-8684-3b2a5bbed6df-kube-api-access-2bnqk\") pod \"cinder-scheduler-0\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.303327 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.332481 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5982r"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.350447 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.350580 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.350611 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqwn7\" (UniqueName: \"kubernetes.io/projected/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-kube-api-access-tqwn7\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.350809 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.350889 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.351028 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-config\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.359891 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.362028 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.365020 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.379721 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.396001 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-4bdw6"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.408952 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75d77468c8-8lrt8"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.423009 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-65d45d497f-rtxs5"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.458535 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.458613 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqwn7\" (UniqueName: \"kubernetes.io/projected/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-kube-api-access-tqwn7\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.458679 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a33906f-29de-4dea-bd13-a149e36b146c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.458709 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.458754 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9trl\" (UniqueName: \"kubernetes.io/projected/4a33906f-29de-4dea-bd13-a149e36b146c-kube-api-access-h9trl\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.458833 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.458969 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-config\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " 
pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.459154 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-scripts\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.459182 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.459245 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data-custom\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.459277 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.459297 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a33906f-29de-4dea-bd13-a149e36b146c-logs\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.459334 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.460422 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.465226 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.470281 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-config\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.475320 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.477519 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f4785b448-tkwjh"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.493206 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.498480 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqwn7\" (UniqueName: \"kubernetes.io/projected/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-kube-api-access-tqwn7\") pod \"dnsmasq-dns-7756b9d78c-5982r\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.565916 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.566253 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a33906f-29de-4dea-bd13-a149e36b146c-logs\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.566307 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.566372 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a33906f-29de-4dea-bd13-a149e36b146c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.566403 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9trl\" (UniqueName: \"kubernetes.io/projected/4a33906f-29de-4dea-bd13-a149e36b146c-kube-api-access-h9trl\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.566521 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-scripts\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.566539 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 
13:56:52.566570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data-custom\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.566925 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a33906f-29de-4dea-bd13-a149e36b146c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.568174 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7f6485bcff-l7ccc"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.568217 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-868c98d758-4t8gx"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.568906 4904 scope.go:117] "RemoveContainer" containerID="78f3ddcfc6799851f8926a2b46d04b80d24bd2d70f34004c4147f8ebbb4ef841" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.571860 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a33906f-29de-4dea-bd13-a149e36b146c-logs\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.610786 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data-custom\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.613512 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.614282 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.623038 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-scripts\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.623998 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.624729 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.638758 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9trl\" (UniqueName: \"kubernetes.io/projected/4a33906f-29de-4dea-bd13-a149e36b146c-kube-api-access-h9trl\") pod \"cinder-api-0\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " pod="openstack/cinder-api-0" Nov 21 
13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.648705 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.657568 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.660988 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.661221 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.676298 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.773025 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-config-data\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.773481 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt76t\" (UniqueName: \"kubernetes.io/projected/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-kube-api-access-xt76t\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.773960 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-log-httpd\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.774051 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-run-httpd\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.774090 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.774284 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-scripts\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.774360 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.877794 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xt76t\" (UniqueName: \"kubernetes.io/projected/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-kube-api-access-xt76t\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.878157 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-log-httpd\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.878189 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-run-httpd\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.878211 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.878252 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-scripts\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.878278 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.879322 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-run-httpd\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.879424 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-config-data\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.879617 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-log-httpd\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.925461 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.931348 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.933880 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-config-data\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.934493 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-scripts\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.934895 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt76t\" (UniqueName: \"kubernetes.io/projected/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-kube-api-access-xt76t\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.935468 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " pod="openstack/ceilometer-0" Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.951802 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f4785b448-tkwjh" event={"ID":"5533e799-de81-4937-a2ca-8876e2bf3c22","Type":"ContainerStarted","Data":"c1385d1ee20e24cb8854a797de1c5180ead90cbbf18940baf65445f1b8ebdfc4"} Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.954326 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69db589dbd-54tkv" event={"ID":"64f7518f-8e62-4d48-990c-1955072e5e98","Type":"ContainerStarted","Data":"a84c619f0ce01e59cef47d5478b7c57bc6cd6b9594e08f81bd6b31cd96f87dd0"} Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.958349 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" event={"ID":"8b360c72-5d34-4e63-b653-3f3e80539384","Type":"ContainerStarted","Data":"513b0a127adaad335638381e4840cc596e1d9ce0366b8210889ec3172b82a2cd"} Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.965349 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f6485bcff-l7ccc" event={"ID":"9a6718b1-07c1-4270-a030-cc36bef71bbc","Type":"ContainerStarted","Data":"45cde09c8c813586a0b78beff5ce4df686d1d6d6d1fa1485d9e48cdf86b7d99e"} Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.968492 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" event={"ID":"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d","Type":"ContainerStarted","Data":"3b221828d099f9b7bd7f29e69a2f58e5ed80427c0cd4a8a813f24566e33299a1"} Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.970565 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868c98d758-4t8gx" 
event={"ID":"569e06a9-99e0-475e-ac01-d44af882269f","Type":"ContainerStarted","Data":"62507ccc0a7100daf9c0d7bca586bc957d2794b663b84d993756b5fe0377bf62"} Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.976057 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65d45d497f-rtxs5" event={"ID":"3c42062c-c3df-4382-8bd5-0b9bb6cb711a","Type":"ContainerStarted","Data":"edd81a224de63ff330684297fce1cf81e5fbf28cbd26e5502fe1823bd531513b"} Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.980405 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-798cw" event={"ID":"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec","Type":"ContainerStarted","Data":"89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f"} Nov 21 13:56:52 crc kubenswrapper[4904]: I1121 13:56:52.986603 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.196415 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.289239 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4bgqc" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="registry-server" probeResult="failure" output=< Nov 21 13:56:53 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 13:56:53 crc kubenswrapper[4904]: > Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.457124 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5982r"] Nov 21 13:56:53 crc kubenswrapper[4904]: W1121 13:56:53.566552 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89f9a34d_7ac6_4131_a9d2_af0019e6b5a9.slice/crio-b0fab927f79b610d885a230e82bf4e59e6c6a63360bd58669dea924b45f4aeae WatchSource:0}: Error finding container b0fab927f79b610d885a230e82bf4e59e6c6a63360bd58669dea924b45f4aeae: Status 404 returned error can't find the container with id b0fab927f79b610d885a230e82bf4e59e6c6a63360bd58669dea924b45f4aeae Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.726153 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.815281 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-swift-storage-0\") pod \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.815899 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-svc\") pod \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.815983 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-config\") pod \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.816295 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs5qr\" (UniqueName: \"kubernetes.io/projected/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-kube-api-access-hs5qr\") pod \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.816320 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-sb\") pod \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.816377 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-nb\") pod \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\" (UID: \"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec\") " Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.843738 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-kube-api-access-hs5qr" (OuterVolumeSpecName: "kube-api-access-hs5qr") pod "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" (UID: "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec"). InnerVolumeSpecName "kube-api-access-hs5qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.885747 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" (UID: "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.914088 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.962966 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs5qr\" (UniqueName: \"kubernetes.io/projected/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-kube-api-access-hs5qr\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:53 crc kubenswrapper[4904]: I1121 13:56:53.963009 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.003627 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" (UID: "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.039190 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.068193 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.082467 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-config" (OuterVolumeSpecName: "config") pod "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" (UID: "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.082646 4904 generic.go:334] "Generic (PLEG): container finished" podID="d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" containerID="66b3bf304018362ac30d2bd513ab705238711b9fd1f74a257fd35bdc11114c9d" exitCode=0 Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.082961 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" event={"ID":"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d","Type":"ContainerDied","Data":"66b3bf304018362ac30d2bd513ab705238711b9fd1f74a257fd35bdc11114c9d"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.088633 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" (UID: "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.107512 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" (UID: "6fa1cfed-1058-4c7d-8efb-6e27fabc2bec"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.146864 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dc7665556-sgmcf_c33d7171-905b-4832-8c9c-f0a05b77fde4/neutron-api/0.log" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.146928 4904 generic.go:334] "Generic (PLEG): container finished" podID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerID="ee30ae8fa3265288b5166789e0c11461d588634adaf461db976121e0a6307b02" exitCode=137 Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.147028 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc7665556-sgmcf" event={"ID":"c33d7171-905b-4832-8c9c-f0a05b77fde4","Type":"ContainerDied","Data":"ee30ae8fa3265288b5166789e0c11461d588634adaf461db976121e0a6307b02"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.164127 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5788166-ebcd-434c-8684-3b2a5bbed6df","Type":"ContainerStarted","Data":"5f65491b313874e534b2b63b510f956e98037fd2ab35713bf0204c8ca3d4a9f6"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.170143 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.170183 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.170194 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.193725 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65d45d497f-rtxs5" event={"ID":"3c42062c-c3df-4382-8bd5-0b9bb6cb711a","Type":"ContainerStarted","Data":"70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.194322 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.230555 4904 generic.go:334] "Generic (PLEG): container finished" podID="6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" containerID="89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f" exitCode=0 Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.231058 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-798cw" event={"ID":"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec","Type":"ContainerDied","Data":"89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.231094 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-798cw" event={"ID":"6fa1cfed-1058-4c7d-8efb-6e27fabc2bec","Type":"ContainerDied","Data":"d69d8f60db7a66140a1ae07e6d9df424243e98eaf5003256ac9bcd1708741691"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.231115 4904 scope.go:117] "RemoveContainer" containerID="89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.231279 4904 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-798cw" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.256571 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-65d45d497f-rtxs5" podStartSLOduration=9.256541808 podStartE2EDuration="9.256541808s" podCreationTimestamp="2025-11-21 13:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:54.237197617 +0000 UTC m=+1488.358730169" watchObservedRunningTime="2025-11-21 13:56:54.256541808 +0000 UTC m=+1488.378074350" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.266926 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f4785b448-tkwjh" event={"ID":"5533e799-de81-4937-a2ca-8876e2bf3c22","Type":"ContainerStarted","Data":"bba3b57c2932cf832d0cecf98607ef518d606b985574cbd60467f60e4742bf8d"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.307494 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" event={"ID":"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9","Type":"ContainerStarted","Data":"b0fab927f79b610d885a230e82bf4e59e6c6a63360bd58669dea924b45f4aeae"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.315347 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69db589dbd-54tkv" event={"ID":"64f7518f-8e62-4d48-990c-1955072e5e98","Type":"ContainerStarted","Data":"5f74120aee934e9c71b803ec0b7c39d054a2f274d89519e725064c2971707325"} Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.319686 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.319729 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.361505 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-69db589dbd-54tkv" podStartSLOduration=15.361480035 podStartE2EDuration="15.361480035s" podCreationTimestamp="2025-11-21 13:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:54.353730587 +0000 UTC m=+1488.475263139" watchObservedRunningTime="2025-11-21 13:56:54.361480035 +0000 UTC m=+1488.483012587" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.478340 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-798cw"] Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.556414 4904 scope.go:117] "RemoveContainer" containerID="89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f" Nov 21 13:56:54 crc kubenswrapper[4904]: E1121 13:56:54.575250 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f\": container with ID starting with 89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f not found: ID does not exist" containerID="89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.575325 4904 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f"} err="failed to get container status \"89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f\": rpc error: code = NotFound desc = could not find container \"89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f\": container with ID starting with 89699dd9c38c7530462a0645d66086ea107649b9f266f41736b18bd7d4c4448f not found: ID does not exist" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.592293 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8aca765-8842-4314-a89a-403377c3d5be" path="/var/lib/kubelet/pods/b8aca765-8842-4314-a89a-403377c3d5be/volumes" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.599266 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-798cw"] Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.710180 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dc7665556-sgmcf_c33d7171-905b-4832-8c9c-f0a05b77fde4/neutron-api/0.log" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.710278 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.765289 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.817054 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.822624 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-combined-ca-bundle\") pod \"c33d7171-905b-4832-8c9c-f0a05b77fde4\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.822717 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-config\") pod \"c33d7171-905b-4832-8c9c-f0a05b77fde4\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.822751 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-ovndb-tls-certs\") pod \"c33d7171-905b-4832-8c9c-f0a05b77fde4\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.822917 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-httpd-config\") pod \"c33d7171-905b-4832-8c9c-f0a05b77fde4\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.823202 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddvz9\" (UniqueName: \"kubernetes.io/projected/c33d7171-905b-4832-8c9c-f0a05b77fde4-kube-api-access-ddvz9\") pod \"c33d7171-905b-4832-8c9c-f0a05b77fde4\" (UID: \"c33d7171-905b-4832-8c9c-f0a05b77fde4\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.861884 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c33d7171-905b-4832-8c9c-f0a05b77fde4" (UID: "c33d7171-905b-4832-8c9c-f0a05b77fde4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.867477 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33d7171-905b-4832-8c9c-f0a05b77fde4-kube-api-access-ddvz9" (OuterVolumeSpecName: "kube-api-access-ddvz9") pod "c33d7171-905b-4832-8c9c-f0a05b77fde4" (UID: "c33d7171-905b-4832-8c9c-f0a05b77fde4"). InnerVolumeSpecName "kube-api-access-ddvz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.926573 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-sb\") pod \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.926661 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-svc\") pod \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.926926 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-config\") pod \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.926960 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-swift-storage-0\") pod \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.927840 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scb8s\" (UniqueName: \"kubernetes.io/projected/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-kube-api-access-scb8s\") pod \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.927880 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-nb\") pod \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\" (UID: \"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d\") " Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.929454 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.929478 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddvz9\" (UniqueName: \"kubernetes.io/projected/c33d7171-905b-4832-8c9c-f0a05b77fde4-kube-api-access-ddvz9\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.966154 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-kube-api-access-scb8s" (OuterVolumeSpecName: "kube-api-access-scb8s") pod "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" (UID: "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d"). InnerVolumeSpecName "kube-api-access-scb8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.973144 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" (UID: "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:54 crc kubenswrapper[4904]: I1121 13:56:54.983337 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" (UID: "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:54.996667 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" (UID: "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.002401 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" (UID: "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.002526 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-config" (OuterVolumeSpecName: "config") pod "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" (UID: "d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.007028 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-config" (OuterVolumeSpecName: "config") pod "c33d7171-905b-4832-8c9c-f0a05b77fde4" (UID: "c33d7171-905b-4832-8c9c-f0a05b77fde4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.031792 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.031827 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.031839 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.031855 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scb8s\" (UniqueName: \"kubernetes.io/projected/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-kube-api-access-scb8s\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.031865 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.031874 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.031882 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.037963 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c33d7171-905b-4832-8c9c-f0a05b77fde4" (UID: "c33d7171-905b-4832-8c9c-f0a05b77fde4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.074106 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.109859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c33d7171-905b-4832-8c9c-f0a05b77fde4" (UID: "c33d7171-905b-4832-8c9c-f0a05b77fde4"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.141045 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dn4kl"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.141491 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.142026 4904 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c33d7171-905b-4832-8c9c-f0a05b77fde4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.433201 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dc7665556-sgmcf_c33d7171-905b-4832-8c9c-f0a05b77fde4/neutron-api/0.log" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.433369 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dc7665556-sgmcf" event={"ID":"c33d7171-905b-4832-8c9c-f0a05b77fde4","Type":"ContainerDied","Data":"8550dafa71d95ac13ebbc7b305184e6d87001d168e4e763758715b8c3d3b9c0e"} Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.433439 4904 scope.go:117] "RemoveContainer" containerID="cd79e26b239ca7f5275e1ca2d56bb42cd402ece4f629dd476ba443895eb8fcc8" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.433711 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dc7665556-sgmcf" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.459122 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-f99cf4f76-lnstx"] Nov 21 13:56:55 crc kubenswrapper[4904]: E1121 13:56:55.460350 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" containerName="init" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.460366 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" containerName="init" Nov 21 13:56:55 crc kubenswrapper[4904]: E1121 13:56:55.460400 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-httpd" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.460410 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-httpd" Nov 21 13:56:55 crc kubenswrapper[4904]: E1121 13:56:55.460532 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" containerName="init" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.460543 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" containerName="init" Nov 21 13:56:55 crc kubenswrapper[4904]: E1121 13:56:55.460566 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-api" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.460576 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-api" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.461071 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-api" Nov 21 
13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.461118 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" containerName="neutron-httpd" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.461140 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" containerName="init" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.461159 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" containerName="init" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.462762 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.525534 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerStarted","Data":"6a777386dfc640c7987472952aa964912025cf28c757758d3276221b8a3ec0c3"} Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.584295 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-f99cf4f76-lnstx"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.606207 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.607119 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-combined-ca-bundle\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.607300 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f4785b448-tkwjh" event={"ID":"5533e799-de81-4937-a2ca-8876e2bf3c22","Type":"ContainerStarted","Data":"ab2f9d1c7d2fda0dfef0614a2d7a7d0fcd0e609df77ab566c16da46d71b29497"} Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.607498 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26fd\" (UniqueName: \"kubernetes.io/projected/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-kube-api-access-l26fd\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.607679 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data-custom\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.607882 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.608140 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.609146 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data\") 
pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.624026 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-667fdcb7f-d7lkr"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.625752 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.627908 4904 generic.go:334] "Generic (PLEG): container finished" podID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" containerID="ce128c5284eb63f237987df31b4fae389007729114d1f70ea64bef0475e07b58" exitCode=0 Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.628227 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" event={"ID":"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9","Type":"ContainerDied","Data":"ce128c5284eb63f237987df31b4fae389007729114d1f70ea64bef0475e07b58"} Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.641430 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-667fdcb7f-d7lkr"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.658681 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" event={"ID":"d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d","Type":"ContainerDied","Data":"3b221828d099f9b7bd7f29e69a2f58e5ed80427c0cd4a8a813f24566e33299a1"} Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.658813 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76ff85fb9f-4bdw6" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.669864 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4a33906f-29de-4dea-bd13-a149e36b146c","Type":"ContainerStarted","Data":"bf049b8f6afacb0a59a5a8110bf31b671e23836bd7925315592f65dd0cf38520"} Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.695792 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f5858899-skqzp"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.698756 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.719283 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.719362 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.719408 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data-custom\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.719509 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-combined-ca-bundle\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.719576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-combined-ca-bundle\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.719754 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfch\" (UniqueName: \"kubernetes.io/projected/52194ea9-32ec-4f75-9365-37e63017872a-kube-api-access-5qfch\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.719781 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l26fd\" (UniqueName: \"kubernetes.io/projected/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-kube-api-access-l26fd\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.719861 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data-custom\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.744772 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-combined-ca-bundle\") pod 
\"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.751762 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.757728 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data-custom\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.768281 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l26fd\" (UniqueName: \"kubernetes.io/projected/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-kube-api-access-l26fd\") pod \"heat-engine-f99cf4f76-lnstx\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.768368 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f5858899-skqzp"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.803991 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dc7665556-sgmcf"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.832019 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-combined-ca-bundle\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.832722 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data-custom\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.835122 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-combined-ca-bundle\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.835386 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw88p\" (UniqueName: \"kubernetes.io/projected/1eb6c970-c61d-48ee-9647-cde886a67028-kube-api-access-cw88p\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.835687 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfch\" (UniqueName: \"kubernetes.io/projected/52194ea9-32ec-4f75-9365-37e63017872a-kube-api-access-5qfch\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " 
pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.843136 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.843960 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data-custom\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.844295 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data-custom\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.844756 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.846823 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dc7665556-sgmcf"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.851134 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-combined-ca-bundle\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.854463 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f4785b448-tkwjh" podStartSLOduration=12.854440237 podStartE2EDuration="12.854440237s" podCreationTimestamp="2025-11-21 13:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:56:55.640539337 +0000 UTC m=+1489.762071889" watchObservedRunningTime="2025-11-21 13:56:55.854440237 +0000 UTC m=+1489.975972799" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.867594 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qfch\" (UniqueName: \"kubernetes.io/projected/52194ea9-32ec-4f75-9365-37e63017872a-kube-api-access-5qfch\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.870411 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data\") pod \"heat-cfnapi-667fdcb7f-d7lkr\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.886032 4904 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.928127 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-4bdw6"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.937965 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76ff85fb9f-4bdw6"] Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.949879 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-combined-ca-bundle\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.950045 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw88p\" (UniqueName: \"kubernetes.io/projected/1eb6c970-c61d-48ee-9647-cde886a67028-kube-api-access-cw88p\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.950165 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.950242 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data-custom\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.961746 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.963853 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data-custom\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:55 crc kubenswrapper[4904]: I1121 13:56:55.971157 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.000537 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw88p\" (UniqueName: \"kubernetes.io/projected/1eb6c970-c61d-48ee-9647-cde886a67028-kube-api-access-cw88p\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.021645 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-combined-ca-bundle\") pod \"heat-api-6f5858899-skqzp\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.032548 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.528946 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa1cfed-1058-4c7d-8efb-6e27fabc2bec" path="/var/lib/kubelet/pods/6fa1cfed-1058-4c7d-8efb-6e27fabc2bec/volumes" Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.530398 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33d7171-905b-4832-8c9c-f0a05b77fde4" path="/var/lib/kubelet/pods/c33d7171-905b-4832-8c9c-f0a05b77fde4/volumes" Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.531206 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d" path="/var/lib/kubelet/pods/d57fadd7-3b27-4ebb-9e6a-fb7c5f16ea3d/volumes" Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.694478 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5788166-ebcd-434c-8684-3b2a5bbed6df","Type":"ContainerStarted","Data":"9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170"} Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.698063 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerStarted","Data":"7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d"} Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.702676 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4a33906f-29de-4dea-bd13-a149e36b146c","Type":"ContainerStarted","Data":"52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4"} Nov 21 13:56:56 crc kubenswrapper[4904]: I1121 13:56:56.703551 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dn4kl" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="registry-server" containerID="cri-o://5217bf07d7d164f945a878a49a2fe9600164dacc19f671ff28ee86cd7fb52054" gracePeriod=2 Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.551031 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-868c98d758-4t8gx"] Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.584882 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-796ccc96c5-hwn84"] Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.605138 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-67b7b4f58b-np6xc"] Nov 21 13:56:57 crc 
kubenswrapper[4904]: I1121 13:56:57.615149 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.634482 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-67b7b4f58b-np6xc"] Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.635010 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.635289 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.659389 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-655bcb7b5f-7cf7z"] Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.661233 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.667234 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.683236 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.687885 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-655bcb7b5f-7cf7z"] Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.707963 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data-custom\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.710515 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-internal-tls-certs\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.710624 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-public-tls-certs\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.710782 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-combined-ca-bundle\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.711560 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shkgw\" (UniqueName: \"kubernetes.io/projected/17e8035b-39b0-4410-805f-635ed96e3e46-kube-api-access-shkgw\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " 
pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.711661 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.792439 4904 generic.go:334] "Generic (PLEG): container finished" podID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerID="5217bf07d7d164f945a878a49a2fe9600164dacc19f671ff28ee86cd7fb52054" exitCode=0 Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.792609 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn4kl" event={"ID":"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a","Type":"ContainerDied","Data":"5217bf07d7d164f945a878a49a2fe9600164dacc19f671ff28ee86cd7fb52054"} Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.813859 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.813937 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data-custom\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.813984 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-public-tls-certs\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814051 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-internal-tls-certs\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814082 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data-custom\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814114 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-internal-tls-certs\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814144 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-public-tls-certs\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814180 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-combined-ca-bundle\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814318 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814360 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-combined-ca-bundle\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814389 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shkgw\" (UniqueName: \"kubernetes.io/projected/17e8035b-39b0-4410-805f-635ed96e3e46-kube-api-access-shkgw\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.814426 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x85w6\" (UniqueName: \"kubernetes.io/projected/4316a7f8-1be9-410c-a536-28107c101ac5-kube-api-access-x85w6\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.837994 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data-custom\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.839289 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.840743 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-internal-tls-certs\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.850543 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-public-tls-certs\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.851160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shkgw\" (UniqueName: \"kubernetes.io/projected/17e8035b-39b0-4410-805f-635ed96e3e46-kube-api-access-shkgw\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.851314 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-combined-ca-bundle\") pod \"heat-api-67b7b4f58b-np6xc\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.882462 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2wkx4"] Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.884466 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.898655 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2wkx4"] Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.917662 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x85w6\" (UniqueName: \"kubernetes.io/projected/4316a7f8-1be9-410c-a536-28107c101ac5-kube-api-access-x85w6\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.917776 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data-custom\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.917837 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-public-tls-certs\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.917898 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-internal-tls-certs\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.918056 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.918094 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-combined-ca-bundle\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.932600 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-combined-ca-bundle\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.937648 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-public-tls-certs\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.941648 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data-custom\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.947867 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-internal-tls-certs\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.966287 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x85w6\" (UniqueName: \"kubernetes.io/projected/4316a7f8-1be9-410c-a536-28107c101ac5-kube-api-access-x85w6\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.966297 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data\") pod \"heat-cfnapi-655bcb7b5f-7cf7z\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.969638 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.984183 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-dvqn9"] Nov 21 13:56:57 crc kubenswrapper[4904]: I1121 13:56:57.987558 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.005510 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8fa3-account-create-z2zh6"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.031469 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdjkn\" (UniqueName: \"kubernetes.io/projected/2a8325f0-6c0c-4ae0-98b5-be6835297e21-kube-api-access-vdjkn\") pod \"nova-api-db-create-2wkx4\" (UID: \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\") " pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.031609 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8325f0-6c0c-4ae0-98b5-be6835297e21-operator-scripts\") pod \"nova-api-db-create-2wkx4\" (UID: \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\") " pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.031941 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.033287 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dvqn9"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.033374 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.036975 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.057778 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8fa3-account-create-z2zh6"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.114841 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.121951 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.135265 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm9gq\" (UniqueName: \"kubernetes.io/projected/ae8afdd3-5682-4977-b883-b58fb1f25857-kube-api-access-wm9gq\") pod \"nova-cell0-db-create-dvqn9\" (UID: \"ae8afdd3-5682-4977-b883-b58fb1f25857\") " pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.135357 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19585c9c-688b-45f2-bc52-83be15f37165-operator-scripts\") pod \"nova-api-8fa3-account-create-z2zh6\" (UID: \"19585c9c-688b-45f2-bc52-83be15f37165\") " pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:56:58 crc kubenswrapper[4904]: 
I1121 13:56:58.135430 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdjkn\" (UniqueName: \"kubernetes.io/projected/2a8325f0-6c0c-4ae0-98b5-be6835297e21-kube-api-access-vdjkn\") pod \"nova-api-db-create-2wkx4\" (UID: \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\") " pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.135458 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn8dl\" (UniqueName: \"kubernetes.io/projected/19585c9c-688b-45f2-bc52-83be15f37165-kube-api-access-pn8dl\") pod \"nova-api-8fa3-account-create-z2zh6\" (UID: \"19585c9c-688b-45f2-bc52-83be15f37165\") " pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.135476 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8afdd3-5682-4977-b883-b58fb1f25857-operator-scripts\") pod \"nova-cell0-db-create-dvqn9\" (UID: \"ae8afdd3-5682-4977-b883-b58fb1f25857\") " pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.135578 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8325f0-6c0c-4ae0-98b5-be6835297e21-operator-scripts\") pod \"nova-api-db-create-2wkx4\" (UID: \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\") " pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.136473 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8325f0-6c0c-4ae0-98b5-be6835297e21-operator-scripts\") pod \"nova-api-db-create-2wkx4\" (UID: \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\") " pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.160208 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdjkn\" (UniqueName: \"kubernetes.io/projected/2a8325f0-6c0c-4ae0-98b5-be6835297e21-kube-api-access-vdjkn\") pod \"nova-api-db-create-2wkx4\" (UID: \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\") " pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.175801 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4njzh"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.177507 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.197104 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4njzh"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.221229 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4569-account-create-kkf9c"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.223273 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.232275 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.238720 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm9gq\" (UniqueName: \"kubernetes.io/projected/ae8afdd3-5682-4977-b883-b58fb1f25857-kube-api-access-wm9gq\") pod \"nova-cell0-db-create-dvqn9\" (UID: \"ae8afdd3-5682-4977-b883-b58fb1f25857\") " pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.238836 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19585c9c-688b-45f2-bc52-83be15f37165-operator-scripts\") pod \"nova-api-8fa3-account-create-z2zh6\" (UID: \"19585c9c-688b-45f2-bc52-83be15f37165\") " pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.238921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn8dl\" (UniqueName: \"kubernetes.io/projected/19585c9c-688b-45f2-bc52-83be15f37165-kube-api-access-pn8dl\") pod \"nova-api-8fa3-account-create-z2zh6\" (UID: \"19585c9c-688b-45f2-bc52-83be15f37165\") " pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.239004 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8afdd3-5682-4977-b883-b58fb1f25857-operator-scripts\") pod \"nova-cell0-db-create-dvqn9\" (UID: \"ae8afdd3-5682-4977-b883-b58fb1f25857\") " pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.240088 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8afdd3-5682-4977-b883-b58fb1f25857-operator-scripts\") pod \"nova-cell0-db-create-dvqn9\" (UID: \"ae8afdd3-5682-4977-b883-b58fb1f25857\") " pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.240106 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19585c9c-688b-45f2-bc52-83be15f37165-operator-scripts\") pod \"nova-api-8fa3-account-create-z2zh6\" (UID: \"19585c9c-688b-45f2-bc52-83be15f37165\") " pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.253181 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4569-account-create-kkf9c"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.264135 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm9gq\" (UniqueName: \"kubernetes.io/projected/ae8afdd3-5682-4977-b883-b58fb1f25857-kube-api-access-wm9gq\") pod \"nova-cell0-db-create-dvqn9\" (UID: \"ae8afdd3-5682-4977-b883-b58fb1f25857\") " pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.274938 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn8dl\" (UniqueName: \"kubernetes.io/projected/19585c9c-688b-45f2-bc52-83be15f37165-kube-api-access-pn8dl\") pod \"nova-api-8fa3-account-create-z2zh6\" (UID: \"19585c9c-688b-45f2-bc52-83be15f37165\") " 
pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.347444 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22a1fb-12bb-4241-bb3c-8a659b96630b-operator-scripts\") pod \"nova-cell0-4569-account-create-kkf9c\" (UID: \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\") " pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.347540 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-864kn\" (UniqueName: \"kubernetes.io/projected/5e22a1fb-12bb-4241-bb3c-8a659b96630b-kube-api-access-864kn\") pod \"nova-cell0-4569-account-create-kkf9c\" (UID: \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\") " pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.347584 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-operator-scripts\") pod \"nova-cell1-db-create-4njzh\" (UID: \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\") " pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.347630 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlqfm\" (UniqueName: \"kubernetes.io/projected/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-kube-api-access-mlqfm\") pod \"nova-cell1-db-create-4njzh\" (UID: \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\") " pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.367762 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-f88f-account-create-ljzdz"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.368759 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.372894 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.380886 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.390702 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.405937 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f88f-account-create-ljzdz"] Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.421752 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.450511 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22a1fb-12bb-4241-bb3c-8a659b96630b-operator-scripts\") pod \"nova-cell0-4569-account-create-kkf9c\" (UID: \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\") " pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.450594 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-864kn\" (UniqueName: \"kubernetes.io/projected/5e22a1fb-12bb-4241-bb3c-8a659b96630b-kube-api-access-864kn\") pod \"nova-cell0-4569-account-create-kkf9c\" (UID: \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\") " pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.450632 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-operator-scripts\") pod \"nova-cell1-db-create-4njzh\" (UID: \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\") " pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.450700 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ae1c31-c9da-4ab8-ae06-1d287d556e56-operator-scripts\") pod \"nova-cell1-f88f-account-create-ljzdz\" (UID: \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\") " pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.450728 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlqfm\" (UniqueName: \"kubernetes.io/projected/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-kube-api-access-mlqfm\") pod \"nova-cell1-db-create-4njzh\" (UID: \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\") " pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.450773 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2twfs\" (UniqueName: \"kubernetes.io/projected/60ae1c31-c9da-4ab8-ae06-1d287d556e56-kube-api-access-2twfs\") pod \"nova-cell1-f88f-account-create-ljzdz\" (UID: \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\") " pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.452251 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22a1fb-12bb-4241-bb3c-8a659b96630b-operator-scripts\") pod \"nova-cell0-4569-account-create-kkf9c\" (UID: \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\") " pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.453181 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-operator-scripts\") pod 
\"nova-cell1-db-create-4njzh\" (UID: \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\") " pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.477411 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlqfm\" (UniqueName: \"kubernetes.io/projected/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-kube-api-access-mlqfm\") pod \"nova-cell1-db-create-4njzh\" (UID: \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\") " pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.483605 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-864kn\" (UniqueName: \"kubernetes.io/projected/5e22a1fb-12bb-4241-bb3c-8a659b96630b-kube-api-access-864kn\") pod \"nova-cell0-4569-account-create-kkf9c\" (UID: \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\") " pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.553323 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ae1c31-c9da-4ab8-ae06-1d287d556e56-operator-scripts\") pod \"nova-cell1-f88f-account-create-ljzdz\" (UID: \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\") " pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.553426 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2twfs\" (UniqueName: \"kubernetes.io/projected/60ae1c31-c9da-4ab8-ae06-1d287d556e56-kube-api-access-2twfs\") pod \"nova-cell1-f88f-account-create-ljzdz\" (UID: \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\") " pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.554680 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ae1c31-c9da-4ab8-ae06-1d287d556e56-operator-scripts\") pod \"nova-cell1-f88f-account-create-ljzdz\" (UID: \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\") " pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.565369 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.573913 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2twfs\" (UniqueName: \"kubernetes.io/projected/60ae1c31-c9da-4ab8-ae06-1d287d556e56-kube-api-access-2twfs\") pod \"nova-cell1-f88f-account-create-ljzdz\" (UID: \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\") " pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.576804 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:56:58 crc kubenswrapper[4904]: I1121 13:56:58.752880 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:57:01 crc kubenswrapper[4904]: I1121 13:57:01.258038 4904 scope.go:117] "RemoveContainer" containerID="ee30ae8fa3265288b5166789e0c11461d588634adaf461db976121e0a6307b02" Nov 21 13:57:01 crc kubenswrapper[4904]: I1121 13:57:01.889192 4904 scope.go:117] "RemoveContainer" containerID="66b3bf304018362ac30d2bd513ab705238711b9fd1f74a257fd35bdc11114c9d" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.251778 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.366188 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-catalog-content\") pod \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.366420 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-utilities\") pod \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.366462 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxh4\" (UniqueName: \"kubernetes.io/projected/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-kube-api-access-rnxh4\") pod \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\" (UID: \"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a\") " Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.369131 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-utilities" (OuterVolumeSpecName: "utilities") pod "713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" (UID: "713cdb85-aeb5-46c2-9fe0-bed76d06dc9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.391570 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-kube-api-access-rnxh4" (OuterVolumeSpecName: "kube-api-access-rnxh4") pod "713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" (UID: "713cdb85-aeb5-46c2-9fe0-bed76d06dc9a"). InnerVolumeSpecName "kube-api-access-rnxh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.470924 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.470966 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxh4\" (UniqueName: \"kubernetes.io/projected/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-kube-api-access-rnxh4\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.494799 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" (UID: "713cdb85-aeb5-46c2-9fe0-bed76d06dc9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.575498 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.950420 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn4kl" event={"ID":"713cdb85-aeb5-46c2-9fe0-bed76d06dc9a","Type":"ContainerDied","Data":"be4cf993fe47d829464f1a3e13f5906a46c7bbd1f63497605981dc36675d7a75"} Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.950985 4904 scope.go:117] "RemoveContainer" containerID="5217bf07d7d164f945a878a49a2fe9600164dacc19f671ff28ee86cd7fb52054" Nov 21 13:57:02 crc kubenswrapper[4904]: I1121 13:57:02.951149 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dn4kl" Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.182653 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.213914 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4bgqc" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="registry-server" probeResult="failure" output=< Nov 21 13:57:03 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 13:57:03 crc kubenswrapper[4904]: > Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.522771 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dn4kl"] Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.538338 4904 scope.go:117] "RemoveContainer" containerID="e56270ac1d05f84eed08e305496d561732d93a124ea0a6493c88ed1e37160315" Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.552747 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dn4kl"] Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.552825 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-69db589dbd-54tkv" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.192:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.562063 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f4785b448-tkwjh" podUID="5533e799-de81-4937-a2ca-8876e2bf3c22" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.193:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.576363 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.733924 4904 scope.go:117] "RemoveContainer" containerID="eaf6448e9789997bdd02812683f4225252f21df94020bd88e72671f5cbdd2453" Nov 21 13:57:03 crc kubenswrapper[4904]: I1121 13:57:03.869099 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.026027 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerStarted","Data":"ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921"} Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.041177 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f6485bcff-l7ccc" event={"ID":"9a6718b1-07c1-4270-a030-cc36bef71bbc","Type":"ContainerStarted","Data":"7e955deb623a8036a75425728b8fbce41dccc275fd66b955a09ec2a5239e47d2"} Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.137112 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" event={"ID":"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9","Type":"ContainerStarted","Data":"dd0355b0668e129c4a64c1d46c141a5af576d573c6cc94249e8312bf78c77353"} Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.137169 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.208023 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" podStartSLOduration=13.207999092 podStartE2EDuration="13.207999092s" podCreationTimestamp="2025-11-21 13:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:04.158163527 +0000 UTC m=+1498.279696079" watchObservedRunningTime="2025-11-21 13:57:04.207999092 +0000 UTC m=+1498.329531644" Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.568066 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" path="/var/lib/kubelet/pods/713cdb85-aeb5-46c2-9fe0-bed76d06dc9a/volumes" Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.784286 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f5858899-skqzp"] Nov 21 13:57:04 crc kubenswrapper[4904]: W1121 13:57:04.845988 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb6c970_c61d_48ee_9647_cde886a67028.slice/crio-e578bf97dc5430ed819197b48ee6b80ed77c1ccad62a1332c916151a81f261b0 WatchSource:0}: Error finding container e578bf97dc5430ed819197b48ee6b80ed77c1ccad62a1332c916151a81f261b0: Status 404 returned error can't find the container with id e578bf97dc5430ed819197b48ee6b80ed77c1ccad62a1332c916151a81f261b0 Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.846581 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-f99cf4f76-lnstx"] Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.869516 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-f88f-account-create-ljzdz"] Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.895813 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8fa3-account-create-z2zh6"] Nov 21 13:57:04 crc kubenswrapper[4904]: I1121 13:57:04.936230 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-dvqn9"] Nov 21 13:57:04 crc kubenswrapper[4904]: W1121 13:57:04.941987 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52194ea9_32ec_4f75_9365_37e63017872a.slice/crio-83bea8979f6a8bd56b2296b1465377107573316f1c61792ca78722fac62a9fd8 WatchSource:0}: Error finding 
container 83bea8979f6a8bd56b2296b1465377107573316f1c61792ca78722fac62a9fd8: Status 404 returned error can't find the container with id 83bea8979f6a8bd56b2296b1465377107573316f1c61792ca78722fac62a9fd8 Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.021345 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2wkx4"] Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.081095 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-667fdcb7f-d7lkr"] Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.094378 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-67b7b4f58b-np6xc"] Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.198110 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f5858899-skqzp" event={"ID":"1eb6c970-c61d-48ee-9647-cde886a67028","Type":"ContainerStarted","Data":"e578bf97dc5430ed819197b48ee6b80ed77c1ccad62a1332c916151a81f261b0"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.200291 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-f99cf4f76-lnstx" event={"ID":"9c8eecc3-a2e3-427e-af0e-adc6e08416ad","Type":"ContainerStarted","Data":"ec66aca68da23bc021ff29b7847472bf67d1e5830c092008ae1d8de2f73857c2"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.202986 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f88f-account-create-ljzdz" event={"ID":"60ae1c31-c9da-4ab8-ae06-1d287d556e56","Type":"ContainerStarted","Data":"15b1de37161ca13b73a072aedd7053bd7105605ce323f35fe854fd1af73680ef"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.223220 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4njzh"] Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.238335 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4569-account-create-kkf9c"] Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.273301 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4a33906f-29de-4dea-bd13-a149e36b146c","Type":"ContainerStarted","Data":"24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.273867 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api-log" containerID="cri-o://52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4" gracePeriod=30 Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.274269 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api" containerID="cri-o://24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e" gracePeriod=30 Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.274283 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.284278 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-655bcb7b5f-7cf7z"] Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.290682 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" 
event={"ID":"51592b04-b9b0-4744-90c9-81b2f01af3ae","Type":"ContainerStarted","Data":"eb71714c5a05591f338f56708320e99164d405d8054411fffa4dbbc44bbd8b69"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.294755 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" podUID="51592b04-b9b0-4744-90c9-81b2f01af3ae" containerName="heat-cfnapi" containerID="cri-o://eb71714c5a05591f338f56708320e99164d405d8054411fffa4dbbc44bbd8b69" gracePeriod=60 Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.295141 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.312518 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=13.312487069 podStartE2EDuration="13.312487069s" podCreationTimestamp="2025-11-21 13:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:05.309414874 +0000 UTC m=+1499.430947436" watchObservedRunningTime="2025-11-21 13:57:05.312487069 +0000 UTC m=+1499.434019621" Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.334492 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" podStartSLOduration=10.190165434 podStartE2EDuration="20.334456845s" podCreationTimestamp="2025-11-21 13:56:45 +0000 UTC" firstStartedPulling="2025-11-21 13:56:51.591564883 +0000 UTC m=+1485.713097425" lastFinishedPulling="2025-11-21 13:57:01.735856284 +0000 UTC m=+1495.857388836" observedRunningTime="2025-11-21 13:57:05.333335807 +0000 UTC m=+1499.454868369" watchObservedRunningTime="2025-11-21 13:57:05.334456845 +0000 UTC m=+1499.455989417" Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.340132 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7f6485bcff-l7ccc" event={"ID":"9a6718b1-07c1-4270-a030-cc36bef71bbc","Type":"ContainerStarted","Data":"d7586591b289a0ed2fe859261a641328e6275f563b2589d35b941db517acda1b"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.362629 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2wkx4" event={"ID":"2a8325f0-6c0c-4ae0-98b5-be6835297e21","Type":"ContainerStarted","Data":"2bc1a86d27c5e6dbc3839aa7047c84700f87835533a4794f496349b46922730c"} Nov 21 13:57:05 crc kubenswrapper[4904]: W1121 13:57:05.390504 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4316a7f8_1be9_410c_a536_28107c101ac5.slice/crio-5d8929bab13474664f08aa6b6690fabd6354dd70e4fe47a31f84060d2ddddd7e WatchSource:0}: Error finding container 5d8929bab13474664f08aa6b6690fabd6354dd70e4fe47a31f84060d2ddddd7e: Status 404 returned error can't find the container with id 5d8929bab13474664f08aa6b6690fabd6354dd70e4fe47a31f84060d2ddddd7e Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.391988 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-868c98d758-4t8gx" podUID="569e06a9-99e0-475e-ac01-d44af882269f" containerName="heat-api" containerID="cri-o://7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c" gracePeriod=60 Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.392188 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-868c98d758-4t8gx" event={"ID":"569e06a9-99e0-475e-ac01-d44af882269f","Type":"ContainerStarted","Data":"7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.392304 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.396101 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7f6485bcff-l7ccc" podStartSLOduration=17.278402306 podStartE2EDuration="26.396070456s" podCreationTimestamp="2025-11-21 13:56:39 +0000 UTC" firstStartedPulling="2025-11-21 13:56:52.384207894 +0000 UTC m=+1486.505740446" lastFinishedPulling="2025-11-21 13:57:01.501876044 +0000 UTC m=+1495.623408596" observedRunningTime="2025-11-21 13:57:05.375064174 +0000 UTC m=+1499.496596726" watchObservedRunningTime="2025-11-21 13:57:05.396070456 +0000 UTC m=+1499.517603008" Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.404864 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" event={"ID":"52194ea9-32ec-4f75-9365-37e63017872a","Type":"ContainerStarted","Data":"83bea8979f6a8bd56b2296b1465377107573316f1c61792ca78722fac62a9fd8"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.426643 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dvqn9" event={"ID":"ae8afdd3-5682-4977-b883-b58fb1f25857","Type":"ContainerStarted","Data":"7a713b532fefdc50264feb634b15af934d2014d436d35cc6b49e7983e82b7562"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.436723 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-868c98d758-4t8gx" podStartSLOduration=11.200262291 podStartE2EDuration="20.436701475s" podCreationTimestamp="2025-11-21 13:56:45 +0000 UTC" firstStartedPulling="2025-11-21 13:56:52.468710022 +0000 UTC m=+1486.590242564" lastFinishedPulling="2025-11-21 13:57:01.705149196 +0000 UTC m=+1495.826681748" observedRunningTime="2025-11-21 13:57:05.428949557 +0000 UTC m=+1499.550482109" watchObservedRunningTime="2025-11-21 13:57:05.436701475 +0000 UTC m=+1499.558234027" Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.460738 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" event={"ID":"8b360c72-5d34-4e63-b653-3f3e80539384","Type":"ContainerStarted","Data":"39bba7c7960068176968cd5cecdaecf60b89ca8b19b89fc30ac47ca4d48d32dc"} Nov 21 13:57:05 crc kubenswrapper[4904]: W1121 13:57:05.465987 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e22a1fb_12bb_4241_bb3c_8a659b96630b.slice/crio-496048be17ab937c2a39ee6ee68ff9570b0609cc9f9731519762d0e3404d816f WatchSource:0}: Error finding container 496048be17ab937c2a39ee6ee68ff9570b0609cc9f9731519762d0e3404d816f: Status 404 returned error can't find the container with id 496048be17ab937c2a39ee6ee68ff9570b0609cc9f9731519762d0e3404d816f Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.467013 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67b7b4f58b-np6xc" event={"ID":"17e8035b-39b0-4410-805f-635ed96e3e46","Type":"ContainerStarted","Data":"20adc14a7ae5314f6eb3939472f077c96cbee62a652201f3f10d38f18eee09c9"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.488213 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-8fa3-account-create-z2zh6" event={"ID":"19585c9c-688b-45f2-bc52-83be15f37165","Type":"ContainerStarted","Data":"deab975397dbcde2cbb9eec1ece47060a423c758e24b5485aae41a9fa949919d"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.500076 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" podStartSLOduration=17.212464479 podStartE2EDuration="26.500045368s" podCreationTimestamp="2025-11-21 13:56:39 +0000 UTC" firstStartedPulling="2025-11-21 13:56:52.339103095 +0000 UTC m=+1486.460635647" lastFinishedPulling="2025-11-21 13:57:01.626683984 +0000 UTC m=+1495.748216536" observedRunningTime="2025-11-21 13:57:05.482789938 +0000 UTC m=+1499.604322490" watchObservedRunningTime="2025-11-21 13:57:05.500045368 +0000 UTC m=+1499.621577920" Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.513369 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5788166-ebcd-434c-8684-3b2a5bbed6df","Type":"ContainerStarted","Data":"82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591"} Nov 21 13:57:05 crc kubenswrapper[4904]: I1121 13:57:05.561264 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=13.239446667 podStartE2EDuration="14.561236589s" podCreationTimestamp="2025-11-21 13:56:51 +0000 UTC" firstStartedPulling="2025-11-21 13:56:53.233527236 +0000 UTC m=+1487.355059788" lastFinishedPulling="2025-11-21 13:56:54.555317158 +0000 UTC m=+1488.676849710" observedRunningTime="2025-11-21 13:57:05.558114243 +0000 UTC m=+1499.679646805" watchObservedRunningTime="2025-11-21 13:57:05.561236589 +0000 UTC m=+1499.682769141" Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.054210 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.606983 4904 generic.go:334] "Generic (PLEG): container finished" podID="51592b04-b9b0-4744-90c9-81b2f01af3ae" containerID="eb71714c5a05591f338f56708320e99164d405d8054411fffa4dbbc44bbd8b69" exitCode=0 Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.607422 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" event={"ID":"51592b04-b9b0-4744-90c9-81b2f01af3ae","Type":"ContainerDied","Data":"eb71714c5a05591f338f56708320e99164d405d8054411fffa4dbbc44bbd8b69"} Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.642894 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4569-account-create-kkf9c" event={"ID":"5e22a1fb-12bb-4241-bb3c-8a659b96630b","Type":"ContainerStarted","Data":"496048be17ab937c2a39ee6ee68ff9570b0609cc9f9731519762d0e3404d816f"} Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.652803 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerStarted","Data":"5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34"} Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.662904 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" event={"ID":"4316a7f8-1be9-410c-a536-28107c101ac5","Type":"ContainerStarted","Data":"5d8929bab13474664f08aa6b6690fabd6354dd70e4fe47a31f84060d2ddddd7e"} Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.685975 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-db-create-4njzh" event={"ID":"b95d5901-0cb0-4e5f-82ab-be2364b11c5e","Type":"ContainerStarted","Data":"41a6cf193df0e3261fde2e55d116b5f3e05a8019d691611acd20f3b9e12df40b"} Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.707912 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75d77468c8-8lrt8" event={"ID":"8b360c72-5d34-4e63-b653-3f3e80539384","Type":"ContainerStarted","Data":"56eaf1b88a55a0f742707521b14a7a769addeb8dd1c4cd7d1c368b7ac6382df1"} Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.719589 4904 generic.go:334] "Generic (PLEG): container finished" podID="4a33906f-29de-4dea-bd13-a149e36b146c" containerID="52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4" exitCode=143 Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.719720 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4a33906f-29de-4dea-bd13-a149e36b146c","Type":"ContainerDied","Data":"52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4"} Nov 21 13:57:06 crc kubenswrapper[4904]: I1121 13:57:06.742408 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"db36aaca-d216-45b3-b8f1-f7a94bae89e6","Type":"ContainerStarted","Data":"664a725a33f6e03c6e3610f899eee44ac0af0fbd87d9d4f96e8a86317f916a9d"} Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.019810 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.839224228 podStartE2EDuration="34.019786053s" podCreationTimestamp="2025-11-21 13:56:33 +0000 UTC" firstStartedPulling="2025-11-21 13:56:34.031342201 +0000 UTC m=+1468.152874753" lastFinishedPulling="2025-11-21 13:57:04.211904026 +0000 UTC m=+1498.333436578" observedRunningTime="2025-11-21 13:57:06.925742672 +0000 UTC m=+1501.047275224" watchObservedRunningTime="2025-11-21 13:57:07.019786053 +0000 UTC m=+1501.141318605" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.304979 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.316236 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.198:8080/\": dial tcp 10.217.0.198:8080: connect: connection refused" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.658055 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.664112 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.670781 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-combined-ca-bundle\") pod \"569e06a9-99e0-475e-ac01-d44af882269f\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.670839 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data-custom\") pod \"51592b04-b9b0-4744-90c9-81b2f01af3ae\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.670870 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data\") pod \"51592b04-b9b0-4744-90c9-81b2f01af3ae\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.670931 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-combined-ca-bundle\") pod \"51592b04-b9b0-4744-90c9-81b2f01af3ae\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.670984 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z9jl\" (UniqueName: \"kubernetes.io/projected/569e06a9-99e0-475e-ac01-d44af882269f-kube-api-access-4z9jl\") pod \"569e06a9-99e0-475e-ac01-d44af882269f\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.671033 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nvdv\" (UniqueName: \"kubernetes.io/projected/51592b04-b9b0-4744-90c9-81b2f01af3ae-kube-api-access-6nvdv\") pod \"51592b04-b9b0-4744-90c9-81b2f01af3ae\" (UID: \"51592b04-b9b0-4744-90c9-81b2f01af3ae\") " Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.671092 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data-custom\") pod \"569e06a9-99e0-475e-ac01-d44af882269f\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.671118 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data\") pod \"569e06a9-99e0-475e-ac01-d44af882269f\" (UID: \"569e06a9-99e0-475e-ac01-d44af882269f\") " Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.732220 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "569e06a9-99e0-475e-ac01-d44af882269f" (UID: "569e06a9-99e0-475e-ac01-d44af882269f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.733116 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51592b04-b9b0-4744-90c9-81b2f01af3ae-kube-api-access-6nvdv" (OuterVolumeSpecName: "kube-api-access-6nvdv") pod "51592b04-b9b0-4744-90c9-81b2f01af3ae" (UID: "51592b04-b9b0-4744-90c9-81b2f01af3ae"). InnerVolumeSpecName "kube-api-access-6nvdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.734630 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569e06a9-99e0-475e-ac01-d44af882269f-kube-api-access-4z9jl" (OuterVolumeSpecName: "kube-api-access-4z9jl") pod "569e06a9-99e0-475e-ac01-d44af882269f" (UID: "569e06a9-99e0-475e-ac01-d44af882269f"). InnerVolumeSpecName "kube-api-access-4z9jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.754354 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "51592b04-b9b0-4744-90c9-81b2f01af3ae" (UID: "51592b04-b9b0-4744-90c9-81b2f01af3ae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.777632 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.777681 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z9jl\" (UniqueName: \"kubernetes.io/projected/569e06a9-99e0-475e-ac01-d44af882269f-kube-api-access-4z9jl\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.777693 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nvdv\" (UniqueName: \"kubernetes.io/projected/51592b04-b9b0-4744-90c9-81b2f01af3ae-kube-api-access-6nvdv\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.777702 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.868001 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67b7b4f58b-np6xc" event={"ID":"17e8035b-39b0-4410-805f-635ed96e3e46","Type":"ContainerStarted","Data":"e84b41adf000ee575d1c1e8d841c01ba15be0896116811d265227a22b5cec2d5"} Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.870038 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.877698 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "569e06a9-99e0-475e-ac01-d44af882269f" (UID: "569e06a9-99e0-475e-ac01-d44af882269f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.881512 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.902213 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" event={"ID":"4316a7f8-1be9-410c-a536-28107c101ac5","Type":"ContainerStarted","Data":"f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6"} Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.904015 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.915347 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51592b04-b9b0-4744-90c9-81b2f01af3ae" (UID: "51592b04-b9b0-4744-90c9-81b2f01af3ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.931257 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-67b7b4f58b-np6xc" podStartSLOduration=10.931228049 podStartE2EDuration="10.931228049s" podCreationTimestamp="2025-11-21 13:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:07.894399472 +0000 UTC m=+1502.015932024" watchObservedRunningTime="2025-11-21 13:57:07.931228049 +0000 UTC m=+1502.052760601" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.957866 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" podStartSLOduration=10.957840657 podStartE2EDuration="10.957840657s" podCreationTimestamp="2025-11-21 13:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:07.938789312 +0000 UTC m=+1502.060321864" watchObservedRunningTime="2025-11-21 13:57:07.957840657 +0000 UTC m=+1502.079373209" Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.960813 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4njzh" event={"ID":"b95d5901-0cb0-4e5f-82ab-be2364b11c5e","Type":"ContainerStarted","Data":"552cae89d7343a633d6ec565dc2b95e53b3aad993589a3df2126af186255c0d1"} Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.986146 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f5858899-skqzp" event={"ID":"1eb6c970-c61d-48ee-9647-cde886a67028","Type":"ContainerStarted","Data":"e2b8fb17eebef7149f3e361cc05387c79feb832e29678cc424a47281a7985e29"} Nov 21 13:57:07 crc kubenswrapper[4904]: I1121 13:57:07.986544 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.002782 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f88f-account-create-ljzdz" event={"ID":"60ae1c31-c9da-4ab8-ae06-1d287d556e56","Type":"ContainerStarted","Data":"352cf467b470ed1d24d04fcae3346c00819fb465468e8dee548f4b08d1a83292"} Nov 21 13:57:08 crc 
kubenswrapper[4904]: I1121 13:57:08.008382 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.032941 4904 generic.go:334] "Generic (PLEG): container finished" podID="569e06a9-99e0-475e-ac01-d44af882269f" containerID="7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c" exitCode=0 Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.033059 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-868c98d758-4t8gx" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.033793 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868c98d758-4t8gx" event={"ID":"569e06a9-99e0-475e-ac01-d44af882269f","Type":"ContainerDied","Data":"7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.033821 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-868c98d758-4t8gx" event={"ID":"569e06a9-99e0-475e-ac01-d44af882269f","Type":"ContainerDied","Data":"62507ccc0a7100daf9c0d7bca586bc957d2794b663b84d993756b5fe0377bf62"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.033843 4904 scope.go:117] "RemoveContainer" containerID="7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.044785 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data" (OuterVolumeSpecName: "config-data") pod "569e06a9-99e0-475e-ac01-d44af882269f" (UID: "569e06a9-99e0-475e-ac01-d44af882269f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.053047 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data" (OuterVolumeSpecName: "config-data") pod "51592b04-b9b0-4744-90c9-81b2f01af3ae" (UID: "51592b04-b9b0-4744-90c9-81b2f01af3ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.076360 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.076785 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-796ccc96c5-hwn84" event={"ID":"51592b04-b9b0-4744-90c9-81b2f01af3ae","Type":"ContainerDied","Data":"ad325c93fc9ee7dd3eeff0e7d8174f157218c11027704dc7351bf2186e08c15c"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.142617 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-4njzh" podStartSLOduration=10.142577538 podStartE2EDuration="10.142577538s" podCreationTimestamp="2025-11-21 13:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:07.987157191 +0000 UTC m=+1502.108689743" watchObservedRunningTime="2025-11-21 13:57:08.142577538 +0000 UTC m=+1502.264110100" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.143612 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-f99cf4f76-lnstx" event={"ID":"9c8eecc3-a2e3-427e-af0e-adc6e08416ad","Type":"ContainerStarted","Data":"1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.143918 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.148122 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4569-account-create-kkf9c" event={"ID":"5e22a1fb-12bb-4241-bb3c-8a659b96630b","Type":"ContainerStarted","Data":"bb9939d32bb444c076034ef85a7c1986f8b67a8f7a40495d009f6a31dfd19193"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.156104 4904 generic.go:334] "Generic (PLEG): container finished" podID="ae8afdd3-5682-4977-b883-b58fb1f25857" containerID="ca7af29aef96e3d1553280ad437c7650f3048e7269273edad710154ceaf879fb" exitCode=0 Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.156520 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dvqn9" event={"ID":"ae8afdd3-5682-4977-b883-b58fb1f25857","Type":"ContainerDied","Data":"ca7af29aef96e3d1553280ad437c7650f3048e7269273edad710154ceaf879fb"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.181183 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2wkx4" event={"ID":"2a8325f0-6c0c-4ae0-98b5-be6835297e21","Type":"ContainerStarted","Data":"21fcee51c5fe36cec2c9f8021ad75caab1b04e83dc55f599a6e326cf98eedc5e"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.187117 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569e06a9-99e0-475e-ac01-d44af882269f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.187186 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51592b04-b9b0-4744-90c9-81b2f01af3ae-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.223922 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" event={"ID":"52194ea9-32ec-4f75-9365-37e63017872a","Type":"ContainerStarted","Data":"c9cbe753181336b13d8ff919d0c34686bf66e9a65d982d373679a0c3723baafa"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.228765 4904 
scope.go:117] "RemoveContainer" containerID="c9cbe753181336b13d8ff919d0c34686bf66e9a65d982d373679a0c3723baafa" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.245448 4904 generic.go:334] "Generic (PLEG): container finished" podID="19585c9c-688b-45f2-bc52-83be15f37165" containerID="5f4858f17160180eb95a8561d47f3cc093db3518d06fe831b442671d897d6fa8" exitCode=0 Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.245932 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8fa3-account-create-z2zh6" event={"ID":"19585c9c-688b-45f2-bc52-83be15f37165","Type":"ContainerDied","Data":"5f4858f17160180eb95a8561d47f3cc093db3518d06fe831b442671d897d6fa8"} Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.255938 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f5858899-skqzp" podStartSLOduration=13.255915249 podStartE2EDuration="13.255915249s" podCreationTimestamp="2025-11-21 13:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:08.022117833 +0000 UTC m=+1502.143650405" watchObservedRunningTime="2025-11-21 13:57:08.255915249 +0000 UTC m=+1502.377447801" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.300547 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-f88f-account-create-ljzdz" podStartSLOduration=10.300516476 podStartE2EDuration="10.300516476s" podCreationTimestamp="2025-11-21 13:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:08.048001663 +0000 UTC m=+1502.169534215" watchObservedRunningTime="2025-11-21 13:57:08.300516476 +0000 UTC m=+1502.422049038" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.346855 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-f99cf4f76-lnstx" podStartSLOduration=13.346829674 podStartE2EDuration="13.346829674s" podCreationTimestamp="2025-11-21 13:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:08.173277146 +0000 UTC m=+1502.294809728" watchObservedRunningTime="2025-11-21 13:57:08.346829674 +0000 UTC m=+1502.468362226" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.368598 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-4569-account-create-kkf9c" podStartSLOduration=10.368574883 podStartE2EDuration="10.368574883s" podCreationTimestamp="2025-11-21 13:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:08.202356743 +0000 UTC m=+1502.323889305" watchObservedRunningTime="2025-11-21 13:57:08.368574883 +0000 UTC m=+1502.490107435" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.416489 4904 scope.go:117] "RemoveContainer" containerID="7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.418013 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5f4785b448-tkwjh" podUID="5533e799-de81-4937-a2ca-8876e2bf3c22" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.193:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Nov 21 13:57:08 crc kubenswrapper[4904]: E1121 13:57:08.423252 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c\": container with ID starting with 7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c not found: ID does not exist" containerID="7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.423401 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c"} err="failed to get container status \"7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c\": rpc error: code = NotFound desc = could not find container \"7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c\": container with ID starting with 7ce4bed0b54d4566b005c49c5dfb429dc5d595a0460ecfab32af9ffc8c0b101c not found: ID does not exist" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.423513 4904 scope.go:117] "RemoveContainer" containerID="eb71714c5a05591f338f56708320e99164d405d8054411fffa4dbbc44bbd8b69" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.551633 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-868c98d758-4t8gx"] Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.561343 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-868c98d758-4t8gx"] Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.592710 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-796ccc96c5-hwn84"] Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.608026 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-796ccc96c5-hwn84"] Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.608352 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-69db589dbd-54tkv" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.192:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.608392 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f4785b448-tkwjh" podUID="5533e799-de81-4937-a2ca-8876e2bf3c22" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.193:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.608427 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f4785b448-tkwjh" podUID="5533e799-de81-4937-a2ca-8876e2bf3c22" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.193:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.626933 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.676336 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f4785b448-tkwjh" Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.816342 4904 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-api-69db589dbd-54tkv"] Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.816623 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69db589dbd-54tkv" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api-log" containerID="cri-o://a84c619f0ce01e59cef47d5478b7c57bc6cd6b9594e08f81bd6b31cd96f87dd0" gracePeriod=30 Nov 21 13:57:08 crc kubenswrapper[4904]: I1121 13:57:08.822056 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-69db589dbd-54tkv" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api" containerID="cri-o://5f74120aee934e9c71b803ec0b7c39d054a2f274d89519e725064c2971707325" gracePeriod=30 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.264451 4904 generic.go:334] "Generic (PLEG): container finished" podID="5e22a1fb-12bb-4241-bb3c-8a659b96630b" containerID="bb9939d32bb444c076034ef85a7c1986f8b67a8f7a40495d009f6a31dfd19193" exitCode=0 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.264792 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4569-account-create-kkf9c" event={"ID":"5e22a1fb-12bb-4241-bb3c-8a659b96630b","Type":"ContainerDied","Data":"bb9939d32bb444c076034ef85a7c1986f8b67a8f7a40495d009f6a31dfd19193"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.268485 4904 generic.go:334] "Generic (PLEG): container finished" podID="64f7518f-8e62-4d48-990c-1955072e5e98" containerID="a84c619f0ce01e59cef47d5478b7c57bc6cd6b9594e08f81bd6b31cd96f87dd0" exitCode=143 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.268525 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69db589dbd-54tkv" event={"ID":"64f7518f-8e62-4d48-990c-1955072e5e98","Type":"ContainerDied","Data":"a84c619f0ce01e59cef47d5478b7c57bc6cd6b9594e08f81bd6b31cd96f87dd0"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.274021 4904 generic.go:334] "Generic (PLEG): container finished" podID="52194ea9-32ec-4f75-9365-37e63017872a" containerID="c9cbe753181336b13d8ff919d0c34686bf66e9a65d982d373679a0c3723baafa" exitCode=1 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.274086 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" event={"ID":"52194ea9-32ec-4f75-9365-37e63017872a","Type":"ContainerDied","Data":"c9cbe753181336b13d8ff919d0c34686bf66e9a65d982d373679a0c3723baafa"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.274125 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" event={"ID":"52194ea9-32ec-4f75-9365-37e63017872a","Type":"ContainerStarted","Data":"3b74ca00bad526db62c2c3e2e3784bc99a83e60a44a9ad1ed0a92785a6cdb35c"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.275235 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.286782 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerStarted","Data":"bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.286899 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="ceilometer-central-agent" 
containerID="cri-o://7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d" gracePeriod=30 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.286973 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="ceilometer-notification-agent" containerID="cri-o://ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921" gracePeriod=30 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.286978 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="proxy-httpd" containerID="cri-o://bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b" gracePeriod=30 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.287069 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="sg-core" containerID="cri-o://5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34" gracePeriod=30 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.286919 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.292034 4904 generic.go:334] "Generic (PLEG): container finished" podID="b95d5901-0cb0-4e5f-82ab-be2364b11c5e" containerID="552cae89d7343a633d6ec565dc2b95e53b3aad993589a3df2126af186255c0d1" exitCode=0 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.292179 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4njzh" event={"ID":"b95d5901-0cb0-4e5f-82ab-be2364b11c5e","Type":"ContainerDied","Data":"552cae89d7343a633d6ec565dc2b95e53b3aad993589a3df2126af186255c0d1"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.300416 4904 generic.go:334] "Generic (PLEG): container finished" podID="2a8325f0-6c0c-4ae0-98b5-be6835297e21" containerID="21fcee51c5fe36cec2c9f8021ad75caab1b04e83dc55f599a6e326cf98eedc5e" exitCode=0 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.300470 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2wkx4" event={"ID":"2a8325f0-6c0c-4ae0-98b5-be6835297e21","Type":"ContainerDied","Data":"21fcee51c5fe36cec2c9f8021ad75caab1b04e83dc55f599a6e326cf98eedc5e"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.312390 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" podStartSLOduration=14.312362496 podStartE2EDuration="14.312362496s" podCreationTimestamp="2025-11-21 13:56:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:09.305491739 +0000 UTC m=+1503.427024301" watchObservedRunningTime="2025-11-21 13:57:09.312362496 +0000 UTC m=+1503.433895048" Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.314705 4904 generic.go:334] "Generic (PLEG): container finished" podID="1eb6c970-c61d-48ee-9647-cde886a67028" containerID="e2b8fb17eebef7149f3e361cc05387c79feb832e29678cc424a47281a7985e29" exitCode=1 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.315229 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f5858899-skqzp" 
event={"ID":"1eb6c970-c61d-48ee-9647-cde886a67028","Type":"ContainerDied","Data":"e2b8fb17eebef7149f3e361cc05387c79feb832e29678cc424a47281a7985e29"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.315922 4904 scope.go:117] "RemoveContainer" containerID="e2b8fb17eebef7149f3e361cc05387c79feb832e29678cc424a47281a7985e29" Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.323072 4904 generic.go:334] "Generic (PLEG): container finished" podID="60ae1c31-c9da-4ab8-ae06-1d287d556e56" containerID="352cf467b470ed1d24d04fcae3346c00819fb465468e8dee548f4b08d1a83292" exitCode=0 Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.323162 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f88f-account-create-ljzdz" event={"ID":"60ae1c31-c9da-4ab8-ae06-1d287d556e56","Type":"ContainerDied","Data":"352cf467b470ed1d24d04fcae3346c00819fb465468e8dee548f4b08d1a83292"} Nov 21 13:57:09 crc kubenswrapper[4904]: I1121 13:57:09.395232 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.730824074 podStartE2EDuration="17.395199004s" podCreationTimestamp="2025-11-21 13:56:52 +0000 UTC" firstStartedPulling="2025-11-21 13:56:54.409528645 +0000 UTC m=+1488.531061197" lastFinishedPulling="2025-11-21 13:57:08.073903575 +0000 UTC m=+1502.195436127" observedRunningTime="2025-11-21 13:57:09.367409368 +0000 UTC m=+1503.488941920" watchObservedRunningTime="2025-11-21 13:57:09.395199004 +0000 UTC m=+1503.516731556" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.307747 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.319209 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.321455 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.367293 4904 generic.go:334] "Generic (PLEG): container finished" podID="1eb6c970-c61d-48ee-9647-cde886a67028" containerID="8808d894d1f3465ac36d91eff08b453c5c43038306177b9567623bb89202274a" exitCode=1 Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.367380 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f5858899-skqzp" event={"ID":"1eb6c970-c61d-48ee-9647-cde886a67028","Type":"ContainerDied","Data":"8808d894d1f3465ac36d91eff08b453c5c43038306177b9567623bb89202274a"} Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.367430 4904 scope.go:117] "RemoveContainer" containerID="e2b8fb17eebef7149f3e361cc05387c79feb832e29678cc424a47281a7985e29" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.368335 4904 scope.go:117] "RemoveContainer" containerID="8808d894d1f3465ac36d91eff08b453c5c43038306177b9567623bb89202274a" Nov 21 13:57:10 crc kubenswrapper[4904]: E1121 13:57:10.368767 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6f5858899-skqzp_openstack(1eb6c970-c61d-48ee-9647-cde886a67028)\"" pod="openstack/heat-api-6f5858899-skqzp" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.371521 4904 generic.go:334] "Generic (PLEG): container finished" podID="52194ea9-32ec-4f75-9365-37e63017872a" containerID="3b74ca00bad526db62c2c3e2e3784bc99a83e60a44a9ad1ed0a92785a6cdb35c" exitCode=1 Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.371566 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" event={"ID":"52194ea9-32ec-4f75-9365-37e63017872a","Type":"ContainerDied","Data":"3b74ca00bad526db62c2c3e2e3784bc99a83e60a44a9ad1ed0a92785a6cdb35c"} Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.372347 4904 scope.go:117] "RemoveContainer" containerID="3b74ca00bad526db62c2c3e2e3784bc99a83e60a44a9ad1ed0a92785a6cdb35c" Nov 21 13:57:10 crc kubenswrapper[4904]: E1121 13:57:10.372591 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-667fdcb7f-d7lkr_openstack(52194ea9-32ec-4f75-9365-37e63017872a)\"" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" podUID="52194ea9-32ec-4f75-9365-37e63017872a" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.382510 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8fa3-account-create-z2zh6" event={"ID":"19585c9c-688b-45f2-bc52-83be15f37165","Type":"ContainerDied","Data":"deab975397dbcde2cbb9eec1ece47060a423c758e24b5485aae41a9fa949919d"} Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.382561 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deab975397dbcde2cbb9eec1ece47060a423c758e24b5485aae41a9fa949919d" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.382647 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8fa3-account-create-z2zh6" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.387828 4904 generic.go:334] "Generic (PLEG): container finished" podID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerID="5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34" exitCode=2 Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.387861 4904 generic.go:334] "Generic (PLEG): container finished" podID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerID="ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921" exitCode=0 Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.387870 4904 generic.go:334] "Generic (PLEG): container finished" podID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerID="7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d" exitCode=0 Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.387915 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerDied","Data":"5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34"} Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.387951 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerDied","Data":"ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921"} Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.387962 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerDied","Data":"7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d"} Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.389892 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-dvqn9" event={"ID":"ae8afdd3-5682-4977-b883-b58fb1f25857","Type":"ContainerDied","Data":"7a713b532fefdc50264feb634b15af934d2014d436d35cc6b49e7983e82b7562"} Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.389920 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a713b532fefdc50264feb634b15af934d2014d436d35cc6b49e7983e82b7562" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.389943 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-dvqn9" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.392212 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2wkx4" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.392991 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2wkx4" event={"ID":"2a8325f0-6c0c-4ae0-98b5-be6835297e21","Type":"ContainerDied","Data":"2bc1a86d27c5e6dbc3839aa7047c84700f87835533a4794f496349b46922730c"} Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.393019 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bc1a86d27c5e6dbc3839aa7047c84700f87835533a4794f496349b46922730c" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.432727 4904 scope.go:117] "RemoveContainer" containerID="c9cbe753181336b13d8ff919d0c34686bf66e9a65d982d373679a0c3723baafa" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.442648 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn8dl\" (UniqueName: \"kubernetes.io/projected/19585c9c-688b-45f2-bc52-83be15f37165-kube-api-access-pn8dl\") pod \"19585c9c-688b-45f2-bc52-83be15f37165\" (UID: \"19585c9c-688b-45f2-bc52-83be15f37165\") " Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.442726 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm9gq\" (UniqueName: \"kubernetes.io/projected/ae8afdd3-5682-4977-b883-b58fb1f25857-kube-api-access-wm9gq\") pod \"ae8afdd3-5682-4977-b883-b58fb1f25857\" (UID: \"ae8afdd3-5682-4977-b883-b58fb1f25857\") " Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.442756 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8afdd3-5682-4977-b883-b58fb1f25857-operator-scripts\") pod \"ae8afdd3-5682-4977-b883-b58fb1f25857\" (UID: \"ae8afdd3-5682-4977-b883-b58fb1f25857\") " Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.442777 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdjkn\" (UniqueName: \"kubernetes.io/projected/2a8325f0-6c0c-4ae0-98b5-be6835297e21-kube-api-access-vdjkn\") pod \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\" (UID: \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\") " Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.442821 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19585c9c-688b-45f2-bc52-83be15f37165-operator-scripts\") pod \"19585c9c-688b-45f2-bc52-83be15f37165\" (UID: \"19585c9c-688b-45f2-bc52-83be15f37165\") " Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.442858 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8325f0-6c0c-4ae0-98b5-be6835297e21-operator-scripts\") pod \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\" (UID: \"2a8325f0-6c0c-4ae0-98b5-be6835297e21\") " Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.447322 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a8325f0-6c0c-4ae0-98b5-be6835297e21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a8325f0-6c0c-4ae0-98b5-be6835297e21" (UID: "2a8325f0-6c0c-4ae0-98b5-be6835297e21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.447739 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19585c9c-688b-45f2-bc52-83be15f37165-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19585c9c-688b-45f2-bc52-83be15f37165" (UID: "19585c9c-688b-45f2-bc52-83be15f37165"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.448094 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae8afdd3-5682-4977-b883-b58fb1f25857-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae8afdd3-5682-4977-b883-b58fb1f25857" (UID: "ae8afdd3-5682-4977-b883-b58fb1f25857"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.452835 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a8325f0-6c0c-4ae0-98b5-be6835297e21-kube-api-access-vdjkn" (OuterVolumeSpecName: "kube-api-access-vdjkn") pod "2a8325f0-6c0c-4ae0-98b5-be6835297e21" (UID: "2a8325f0-6c0c-4ae0-98b5-be6835297e21"). InnerVolumeSpecName "kube-api-access-vdjkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.453908 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8afdd3-5682-4977-b883-b58fb1f25857-kube-api-access-wm9gq" (OuterVolumeSpecName: "kube-api-access-wm9gq") pod "ae8afdd3-5682-4977-b883-b58fb1f25857" (UID: "ae8afdd3-5682-4977-b883-b58fb1f25857"). InnerVolumeSpecName "kube-api-access-wm9gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.463592 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19585c9c-688b-45f2-bc52-83be15f37165-kube-api-access-pn8dl" (OuterVolumeSpecName: "kube-api-access-pn8dl") pod "19585c9c-688b-45f2-bc52-83be15f37165" (UID: "19585c9c-688b-45f2-bc52-83be15f37165"). InnerVolumeSpecName "kube-api-access-pn8dl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.530500 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51592b04-b9b0-4744-90c9-81b2f01af3ae" path="/var/lib/kubelet/pods/51592b04-b9b0-4744-90c9-81b2f01af3ae/volumes" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.531526 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569e06a9-99e0-475e-ac01-d44af882269f" path="/var/lib/kubelet/pods/569e06a9-99e0-475e-ac01-d44af882269f/volumes" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.546669 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19585c9c-688b-45f2-bc52-83be15f37165-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.546706 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a8325f0-6c0c-4ae0-98b5-be6835297e21-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.546717 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn8dl\" (UniqueName: \"kubernetes.io/projected/19585c9c-688b-45f2-bc52-83be15f37165-kube-api-access-pn8dl\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.546730 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm9gq\" (UniqueName: \"kubernetes.io/projected/ae8afdd3-5682-4977-b883-b58fb1f25857-kube-api-access-wm9gq\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.546741 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8afdd3-5682-4977-b883-b58fb1f25857-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.546751 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdjkn\" (UniqueName: \"kubernetes.io/projected/2a8325f0-6c0c-4ae0-98b5-be6835297e21-kube-api-access-vdjkn\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:10 crc kubenswrapper[4904]: I1121 13:57:10.972577 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.033049 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.033125 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.036554 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.167122 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-operator-scripts\") pod \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\" (UID: \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\") " Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.168176 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b95d5901-0cb0-4e5f-82ab-be2364b11c5e" (UID: "b95d5901-0cb0-4e5f-82ab-be2364b11c5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.172259 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlqfm\" (UniqueName: \"kubernetes.io/projected/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-kube-api-access-mlqfm\") pod \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\" (UID: \"b95d5901-0cb0-4e5f-82ab-be2364b11c5e\") " Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.174376 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.184964 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-kube-api-access-mlqfm" (OuterVolumeSpecName: "kube-api-access-mlqfm") pod "b95d5901-0cb0-4e5f-82ab-be2364b11c5e" (UID: "b95d5901-0cb0-4e5f-82ab-be2364b11c5e"). InnerVolumeSpecName "kube-api-access-mlqfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.248364 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.255414 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.277125 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ae1c31-c9da-4ab8-ae06-1d287d556e56-operator-scripts\") pod \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\" (UID: \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\") " Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.277493 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-864kn\" (UniqueName: \"kubernetes.io/projected/5e22a1fb-12bb-4241-bb3c-8a659b96630b-kube-api-access-864kn\") pod \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\" (UID: \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\") " Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.277795 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22a1fb-12bb-4241-bb3c-8a659b96630b-operator-scripts\") pod \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\" (UID: \"5e22a1fb-12bb-4241-bb3c-8a659b96630b\") " Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.277881 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2twfs\" (UniqueName: \"kubernetes.io/projected/60ae1c31-c9da-4ab8-ae06-1d287d556e56-kube-api-access-2twfs\") pod \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\" (UID: \"60ae1c31-c9da-4ab8-ae06-1d287d556e56\") " Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.277917 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ae1c31-c9da-4ab8-ae06-1d287d556e56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60ae1c31-c9da-4ab8-ae06-1d287d556e56" (UID: "60ae1c31-c9da-4ab8-ae06-1d287d556e56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.278769 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlqfm\" (UniqueName: \"kubernetes.io/projected/b95d5901-0cb0-4e5f-82ab-be2364b11c5e-kube-api-access-mlqfm\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.278790 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ae1c31-c9da-4ab8-ae06-1d287d556e56-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.279002 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e22a1fb-12bb-4241-bb3c-8a659b96630b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e22a1fb-12bb-4241-bb3c-8a659b96630b" (UID: "5e22a1fb-12bb-4241-bb3c-8a659b96630b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.295460 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e22a1fb-12bb-4241-bb3c-8a659b96630b-kube-api-access-864kn" (OuterVolumeSpecName: "kube-api-access-864kn") pod "5e22a1fb-12bb-4241-bb3c-8a659b96630b" (UID: "5e22a1fb-12bb-4241-bb3c-8a659b96630b"). InnerVolumeSpecName "kube-api-access-864kn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.295532 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ae1c31-c9da-4ab8-ae06-1d287d556e56-kube-api-access-2twfs" (OuterVolumeSpecName: "kube-api-access-2twfs") pod "60ae1c31-c9da-4ab8-ae06-1d287d556e56" (UID: "60ae1c31-c9da-4ab8-ae06-1d287d556e56"). InnerVolumeSpecName "kube-api-access-2twfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.380083 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-864kn\" (UniqueName: \"kubernetes.io/projected/5e22a1fb-12bb-4241-bb3c-8a659b96630b-kube-api-access-864kn\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.380114 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e22a1fb-12bb-4241-bb3c-8a659b96630b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.380123 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2twfs\" (UniqueName: \"kubernetes.io/projected/60ae1c31-c9da-4ab8-ae06-1d287d556e56-kube-api-access-2twfs\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.555066 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-f88f-account-create-ljzdz" event={"ID":"60ae1c31-c9da-4ab8-ae06-1d287d556e56","Type":"ContainerDied","Data":"15b1de37161ca13b73a072aedd7053bd7105605ce323f35fe854fd1af73680ef"} Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.555122 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15b1de37161ca13b73a072aedd7053bd7105605ce323f35fe854fd1af73680ef" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.555277 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-f88f-account-create-ljzdz" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.577149 4904 scope.go:117] "RemoveContainer" containerID="3b74ca00bad526db62c2c3e2e3784bc99a83e60a44a9ad1ed0a92785a6cdb35c" Nov 21 13:57:11 crc kubenswrapper[4904]: E1121 13:57:11.577735 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-667fdcb7f-d7lkr_openstack(52194ea9-32ec-4f75-9365-37e63017872a)\"" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" podUID="52194ea9-32ec-4f75-9365-37e63017872a" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.591763 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4569-account-create-kkf9c" event={"ID":"5e22a1fb-12bb-4241-bb3c-8a659b96630b","Type":"ContainerDied","Data":"496048be17ab937c2a39ee6ee68ff9570b0609cc9f9731519762d0e3404d816f"} Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.591808 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="496048be17ab937c2a39ee6ee68ff9570b0609cc9f9731519762d0e3404d816f" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.591881 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4569-account-create-kkf9c" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.621933 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4njzh" event={"ID":"b95d5901-0cb0-4e5f-82ab-be2364b11c5e","Type":"ContainerDied","Data":"41a6cf193df0e3261fde2e55d116b5f3e05a8019d691611acd20f3b9e12df40b"} Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.621981 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a6cf193df0e3261fde2e55d116b5f3e05a8019d691611acd20f3b9e12df40b" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.622026 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4njzh" Nov 21 13:57:11 crc kubenswrapper[4904]: I1121 13:57:11.639729 4904 scope.go:117] "RemoveContainer" containerID="8808d894d1f3465ac36d91eff08b453c5c43038306177b9567623bb89202274a" Nov 21 13:57:11 crc kubenswrapper[4904]: E1121 13:57:11.643093 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6f5858899-skqzp_openstack(1eb6c970-c61d-48ee-9647-cde886a67028)\"" pod="openstack/heat-api-6f5858899-skqzp" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.146959 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.214370 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.404408 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bgqc"] Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.567805 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.580360 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.668957 4904 scope.go:117] "RemoveContainer" containerID="8808d894d1f3465ac36d91eff08b453c5c43038306177b9567623bb89202274a" Nov 21 13:57:12 crc kubenswrapper[4904]: E1121 13:57:12.669274 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6f5858899-skqzp_openstack(1eb6c970-c61d-48ee-9647-cde886a67028)\"" pod="openstack/heat-api-6f5858899-skqzp" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.684496 4904 scope.go:117] "RemoveContainer" containerID="3b74ca00bad526db62c2c3e2e3784bc99a83e60a44a9ad1ed0a92785a6cdb35c" Nov 21 13:57:12 crc kubenswrapper[4904]: E1121 13:57:12.685228 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-667fdcb7f-d7lkr_openstack(52194ea9-32ec-4f75-9365-37e63017872a)\"" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" podUID="52194ea9-32ec-4f75-9365-37e63017872a" Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.686368 
4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-rjx2d"] Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.686780 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" podUID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" containerName="dnsmasq-dns" containerID="cri-o://361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97" gracePeriod=10 Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.706966 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.707279 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="cinder-scheduler" containerID="cri-o://9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170" gracePeriod=30 Nov 21 13:57:12 crc kubenswrapper[4904]: I1121 13:57:12.708043 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="probe" containerID="cri-o://82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591" gracePeriod=30 Nov 21 13:57:12 crc kubenswrapper[4904]: E1121 13:57:12.767365 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0acfb4a5_5122_4f88_885f_7e255f82c2a1.slice/crio-361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97.scope\": RecentStats: unable to find data in memory cache]" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.476895 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.479923 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69db589dbd-54tkv" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.192:9311/healthcheck\": read tcp 10.217.0.2:33866->10.217.0.192:9311: read: connection reset by peer" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.480236 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-69db589dbd-54tkv" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.192:9311/healthcheck\": read tcp 10.217.0.2:33878->10.217.0.192:9311: read: connection reset by peer" Nov 21 13:57:13 crc kubenswrapper[4904]: W1121 13:57:13.548956 4904 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64f7518f_8e62_4d48_990c_1955072e5e98.slice/crio-8e390e725041890a275775b17f9c04bfc6c2e8d710c59beeaba4ec822cb47b80": error while statting cgroup v2: [unable to parse /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64f7518f_8e62_4d48_990c_1955072e5e98.slice/crio-8e390e725041890a275775b17f9c04bfc6c2e8d710c59beeaba4ec822cb47b80/memory.stat: read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64f7518f_8e62_4d48_990c_1955072e5e98.slice/crio-8e390e725041890a275775b17f9c04bfc6c2e8d710c59beeaba4ec822cb47b80/memory.stat: no such device], continuing to push stats Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.612715 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mb5p"] Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613274 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="extract-content" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613292 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="extract-content" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613305 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" containerName="init" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613311 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" containerName="init" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613322 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a8325f0-6c0c-4ae0-98b5-be6835297e21" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613329 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a8325f0-6c0c-4ae0-98b5-be6835297e21" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613340 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8afdd3-5682-4977-b883-b58fb1f25857" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613346 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8afdd3-5682-4977-b883-b58fb1f25857" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613359 4904 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="51592b04-b9b0-4744-90c9-81b2f01af3ae" containerName="heat-cfnapi" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613365 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="51592b04-b9b0-4744-90c9-81b2f01af3ae" containerName="heat-cfnapi" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613379 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e22a1fb-12bb-4241-bb3c-8a659b96630b" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613385 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e22a1fb-12bb-4241-bb3c-8a659b96630b" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613402 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="extract-utilities" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613408 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="extract-utilities" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613418 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95d5901-0cb0-4e5f-82ab-be2364b11c5e" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613424 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95d5901-0cb0-4e5f-82ab-be2364b11c5e" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613436 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" containerName="dnsmasq-dns" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613442 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" containerName="dnsmasq-dns" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613455 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ae1c31-c9da-4ab8-ae06-1d287d556e56" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613461 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ae1c31-c9da-4ab8-ae06-1d287d556e56" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613479 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="registry-server" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613485 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="registry-server" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613498 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569e06a9-99e0-475e-ac01-d44af882269f" containerName="heat-api" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613506 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="569e06a9-99e0-475e-ac01-d44af882269f" containerName="heat-api" Nov 21 13:57:13 crc kubenswrapper[4904]: E1121 13:57:13.613519 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19585c9c-688b-45f2-bc52-83be15f37165" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613524 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="19585c9c-688b-45f2-bc52-83be15f37165" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613761 
4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="19585c9c-688b-45f2-bc52-83be15f37165" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613779 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ae1c31-c9da-4ab8-ae06-1d287d556e56" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613791 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="569e06a9-99e0-475e-ac01-d44af882269f" containerName="heat-api" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613797 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95d5901-0cb0-4e5f-82ab-be2364b11c5e" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613808 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8afdd3-5682-4977-b883-b58fb1f25857" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613816 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="51592b04-b9b0-4744-90c9-81b2f01af3ae" containerName="heat-cfnapi" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613828 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a8325f0-6c0c-4ae0-98b5-be6835297e21" containerName="mariadb-database-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613838 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e22a1fb-12bb-4241-bb3c-8a659b96630b" containerName="mariadb-account-create" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613848 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" containerName="dnsmasq-dns" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.613860 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="713cdb85-aeb5-46c2-9fe0-bed76d06dc9a" containerName="registry-server" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.614975 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.622382 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r2vn8" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.622617 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.622759 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.623271 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mb5p"] Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.674530 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-sb\") pod \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.674711 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-svc\") pod \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.674834 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-swift-storage-0\") pod \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.674943 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-config\") pod \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.674989 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6kdg\" (UniqueName: \"kubernetes.io/projected/0acfb4a5-5122-4f88-885f-7e255f82c2a1-kube-api-access-n6kdg\") pod \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.675012 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-nb\") pod \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\" (UID: \"0acfb4a5-5122-4f88-885f-7e255f82c2a1\") " Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.688745 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0acfb4a5-5122-4f88-885f-7e255f82c2a1-kube-api-access-n6kdg" (OuterVolumeSpecName: "kube-api-access-n6kdg") pod "0acfb4a5-5122-4f88-885f-7e255f82c2a1" (UID: "0acfb4a5-5122-4f88-885f-7e255f82c2a1"). InnerVolumeSpecName "kube-api-access-n6kdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.735386 4904 generic.go:334] "Generic (PLEG): container finished" podID="64f7518f-8e62-4d48-990c-1955072e5e98" containerID="5f74120aee934e9c71b803ec0b7c39d054a2f274d89519e725064c2971707325" exitCode=0 Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.735477 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69db589dbd-54tkv" event={"ID":"64f7518f-8e62-4d48-990c-1955072e5e98","Type":"ContainerDied","Data":"5f74120aee934e9c71b803ec0b7c39d054a2f274d89519e725064c2971707325"} Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.749093 4904 generic.go:334] "Generic (PLEG): container finished" podID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" containerID="361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97" exitCode=0 Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.749622 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4bgqc" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="registry-server" containerID="cri-o://55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8" gracePeriod=2 Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.750059 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.750672 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" event={"ID":"0acfb4a5-5122-4f88-885f-7e255f82c2a1","Type":"ContainerDied","Data":"361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97"} Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.750933 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-rjx2d" event={"ID":"0acfb4a5-5122-4f88-885f-7e255f82c2a1","Type":"ContainerDied","Data":"40ab0a48b81191c3aefa0a279442a76765866bb4fcd3f4695f14de40df64a363"} Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.750957 4904 scope.go:117] "RemoveContainer" containerID="361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.758469 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0acfb4a5-5122-4f88-885f-7e255f82c2a1" (UID: "0acfb4a5-5122-4f88-885f-7e255f82c2a1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.769154 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0acfb4a5-5122-4f88-885f-7e255f82c2a1" (UID: "0acfb4a5-5122-4f88-885f-7e255f82c2a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.770196 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-config" (OuterVolumeSpecName: "config") pod "0acfb4a5-5122-4f88-885f-7e255f82c2a1" (UID: "0acfb4a5-5122-4f88-885f-7e255f82c2a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.778165 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-scripts\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.778215 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.778318 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pp2d\" (UniqueName: \"kubernetes.io/projected/524ce65b-9914-4643-8132-50ee21805a8c-kube-api-access-5pp2d\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.778342 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-config-data\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.778438 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.778451 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6kdg\" (UniqueName: \"kubernetes.io/projected/0acfb4a5-5122-4f88-885f-7e255f82c2a1-kube-api-access-n6kdg\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.778461 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.778471 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.813679 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0acfb4a5-5122-4f88-885f-7e255f82c2a1" (UID: "0acfb4a5-5122-4f88-885f-7e255f82c2a1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.840320 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0acfb4a5-5122-4f88-885f-7e255f82c2a1" (UID: "0acfb4a5-5122-4f88-885f-7e255f82c2a1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.850382 4904 scope.go:117] "RemoveContainer" containerID="ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.885169 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-scripts\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.885228 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.885317 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pp2d\" (UniqueName: \"kubernetes.io/projected/524ce65b-9914-4643-8132-50ee21805a8c-kube-api-access-5pp2d\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.885336 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-config-data\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.885447 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.885458 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0acfb4a5-5122-4f88-885f-7e255f82c2a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.892768 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-config-data\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.895508 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: 
\"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.907914 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-scripts\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.917360 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pp2d\" (UniqueName: \"kubernetes.io/projected/524ce65b-9914-4643-8132-50ee21805a8c-kube-api-access-5pp2d\") pod \"nova-cell0-conductor-db-sync-5mb5p\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:13 crc kubenswrapper[4904]: I1121 13:57:13.964202 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.070232 4904 scope.go:117] "RemoveContainer" containerID="361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97" Nov 21 13:57:14 crc kubenswrapper[4904]: E1121 13:57:14.071046 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97\": container with ID starting with 361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97 not found: ID does not exist" containerID="361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.071109 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97"} err="failed to get container status \"361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97\": rpc error: code = NotFound desc = could not find container \"361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97\": container with ID starting with 361fd4258f5aeb1e4ae88e18dbf1edce8ecfada352f756fe6e66974f685d8c97 not found: ID does not exist" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.071147 4904 scope.go:117] "RemoveContainer" containerID="ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca" Nov 21 13:57:14 crc kubenswrapper[4904]: E1121 13:57:14.083915 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca\": container with ID starting with ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca not found: ID does not exist" containerID="ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.083962 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca"} err="failed to get container status \"ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca\": rpc error: code = NotFound desc = could not find container \"ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca\": container with ID starting with ae19a70a46afbd0ac440f49a851f61148a653ab585b0e7781efc3e753a1ca5ca not found: ID does not exist" Nov 21 
13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.177798 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-rjx2d"] Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.178958 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.196247 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-rjx2d"] Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.305509 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-combined-ca-bundle\") pod \"64f7518f-8e62-4d48-990c-1955072e5e98\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.306203 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2j6k\" (UniqueName: \"kubernetes.io/projected/64f7518f-8e62-4d48-990c-1955072e5e98-kube-api-access-k2j6k\") pod \"64f7518f-8e62-4d48-990c-1955072e5e98\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.306260 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data-custom\") pod \"64f7518f-8e62-4d48-990c-1955072e5e98\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.306446 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f7518f-8e62-4d48-990c-1955072e5e98-logs\") pod \"64f7518f-8e62-4d48-990c-1955072e5e98\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.306556 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data\") pod \"64f7518f-8e62-4d48-990c-1955072e5e98\" (UID: \"64f7518f-8e62-4d48-990c-1955072e5e98\") " Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.325006 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f7518f-8e62-4d48-990c-1955072e5e98-kube-api-access-k2j6k" (OuterVolumeSpecName: "kube-api-access-k2j6k") pod "64f7518f-8e62-4d48-990c-1955072e5e98" (UID: "64f7518f-8e62-4d48-990c-1955072e5e98"). InnerVolumeSpecName "kube-api-access-k2j6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.328565 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64f7518f-8e62-4d48-990c-1955072e5e98-logs" (OuterVolumeSpecName: "logs") pod "64f7518f-8e62-4d48-990c-1955072e5e98" (UID: "64f7518f-8e62-4d48-990c-1955072e5e98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.339868 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64f7518f-8e62-4d48-990c-1955072e5e98" (UID: "64f7518f-8e62-4d48-990c-1955072e5e98"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.447978 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64f7518f-8e62-4d48-990c-1955072e5e98-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.448254 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2j6k\" (UniqueName: \"kubernetes.io/projected/64f7518f-8e62-4d48-990c-1955072e5e98-kube-api-access-k2j6k\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.448337 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.467786 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data" (OuterVolumeSpecName: "config-data") pod "64f7518f-8e62-4d48-990c-1955072e5e98" (UID: "64f7518f-8e62-4d48-990c-1955072e5e98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.495794 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64f7518f-8e62-4d48-990c-1955072e5e98" (UID: "64f7518f-8e62-4d48-990c-1955072e5e98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.542148 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0acfb4a5-5122-4f88-885f-7e255f82c2a1" path="/var/lib/kubelet/pods/0acfb4a5-5122-4f88-885f-7e255f82c2a1/volumes" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.553440 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.553482 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64f7518f-8e62-4d48-990c-1955072e5e98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.589064 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.759038 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-utilities\") pod \"eb8abf79-d309-4440-9815-0ebcb93b9312\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.759126 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2pbz\" (UniqueName: \"kubernetes.io/projected/eb8abf79-d309-4440-9815-0ebcb93b9312-kube-api-access-r2pbz\") pod \"eb8abf79-d309-4440-9815-0ebcb93b9312\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.759290 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-catalog-content\") pod \"eb8abf79-d309-4440-9815-0ebcb93b9312\" (UID: \"eb8abf79-d309-4440-9815-0ebcb93b9312\") " Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.760512 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-utilities" (OuterVolumeSpecName: "utilities") pod "eb8abf79-d309-4440-9815-0ebcb93b9312" (UID: "eb8abf79-d309-4440-9815-0ebcb93b9312"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.761455 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mb5p"] Nov 21 13:57:14 crc kubenswrapper[4904]: W1121 13:57:14.766367 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod524ce65b_9914_4643_8132_50ee21805a8c.slice/crio-0675c134f402a4ba14c5c2517e87259158c98af68142eb48ed60612f0d2a74ba WatchSource:0}: Error finding container 0675c134f402a4ba14c5c2517e87259158c98af68142eb48ed60612f0d2a74ba: Status 404 returned error can't find the container with id 0675c134f402a4ba14c5c2517e87259158c98af68142eb48ed60612f0d2a74ba Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.766614 4904 generic.go:334] "Generic (PLEG): container finished" podID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerID="55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8" exitCode=0 Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.768739 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4bgqc" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.769220 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bgqc" event={"ID":"eb8abf79-d309-4440-9815-0ebcb93b9312","Type":"ContainerDied","Data":"55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8"} Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.769268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4bgqc" event={"ID":"eb8abf79-d309-4440-9815-0ebcb93b9312","Type":"ContainerDied","Data":"5b6aec42214736a6eaf8b679dd83dffcc20d8dbc49cc8d2ff4efc146a555037a"} Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.769290 4904 scope.go:117] "RemoveContainer" containerID="55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.776490 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8abf79-d309-4440-9815-0ebcb93b9312-kube-api-access-r2pbz" (OuterVolumeSpecName: "kube-api-access-r2pbz") pod "eb8abf79-d309-4440-9815-0ebcb93b9312" (UID: "eb8abf79-d309-4440-9815-0ebcb93b9312"). InnerVolumeSpecName "kube-api-access-r2pbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.803515 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69db589dbd-54tkv" event={"ID":"64f7518f-8e62-4d48-990c-1955072e5e98","Type":"ContainerDied","Data":"8e390e725041890a275775b17f9c04bfc6c2e8d710c59beeaba4ec822cb47b80"} Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.803967 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69db589dbd-54tkv" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.857956 4904 generic.go:334] "Generic (PLEG): container finished" podID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerID="82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591" exitCode=0 Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.858034 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5788166-ebcd-434c-8684-3b2a5bbed6df","Type":"ContainerDied","Data":"82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591"} Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.862302 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb8abf79-d309-4440-9815-0ebcb93b9312" (UID: "eb8abf79-d309-4440-9815-0ebcb93b9312"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.866570 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.866595 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2pbz\" (UniqueName: \"kubernetes.io/projected/eb8abf79-d309-4440-9815-0ebcb93b9312-kube-api-access-r2pbz\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.866610 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8abf79-d309-4440-9815-0ebcb93b9312-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.869987 4904 scope.go:117] "RemoveContainer" containerID="7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd" Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.879619 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-69db589dbd-54tkv"] Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.891321 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-69db589dbd-54tkv"] Nov 21 13:57:14 crc kubenswrapper[4904]: I1121 13:57:14.991973 4904 scope.go:117] "RemoveContainer" containerID="428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.060587 4904 scope.go:117] "RemoveContainer" containerID="55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8" Nov 21 13:57:15 crc kubenswrapper[4904]: E1121 13:57:15.061352 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8\": container with ID starting with 55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8 not found: ID does not exist" containerID="55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.061390 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8"} err="failed to get container status \"55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8\": rpc error: code = NotFound desc = could not find container \"55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8\": container with ID starting with 55fbc1f1295fb8f86c6c6cfa4c1d67226f28d545054149f9aa11cbe0f52427b8 not found: ID does not exist" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.061414 4904 scope.go:117] "RemoveContainer" containerID="7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd" Nov 21 13:57:15 crc kubenswrapper[4904]: E1121 13:57:15.062052 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd\": container with ID starting with 7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd not found: ID does not exist" containerID="7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.062084 4904 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd"} err="failed to get container status \"7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd\": rpc error: code = NotFound desc = could not find container \"7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd\": container with ID starting with 7aeec7bc1f0d94de42990e88944ce98a230b02a1d838cb9be3cf9151f5c6e8bd not found: ID does not exist" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.062102 4904 scope.go:117] "RemoveContainer" containerID="428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302" Nov 21 13:57:15 crc kubenswrapper[4904]: E1121 13:57:15.064065 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302\": container with ID starting with 428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302 not found: ID does not exist" containerID="428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.064126 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302"} err="failed to get container status \"428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302\": rpc error: code = NotFound desc = could not find container \"428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302\": container with ID starting with 428cb1169cdce5ec3ae2d04203ba7be166cf9c046c70c9d4b3d0ab8a72eea302 not found: ID does not exist" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.064146 4904 scope.go:117] "RemoveContainer" containerID="5f74120aee934e9c71b803ec0b7c39d054a2f274d89519e725064c2971707325" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.120560 4904 scope.go:117] "RemoveContainer" containerID="a84c619f0ce01e59cef47d5478b7c57bc6cd6b9594e08f81bd6b31cd96f87dd0" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.131190 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4bgqc"] Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.153884 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4bgqc"] Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.387442 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.471897 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f5858899-skqzp"] Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.565978 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.641646 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-667fdcb7f-d7lkr"] Nov 21 13:57:15 crc kubenswrapper[4904]: I1121 13:57:15.885238 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" event={"ID":"524ce65b-9914-4643-8132-50ee21805a8c","Type":"ContainerStarted","Data":"0675c134f402a4ba14c5c2517e87259158c98af68142eb48ed60612f0d2a74ba"} Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.143925 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.268700 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.324561 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw88p\" (UniqueName: \"kubernetes.io/projected/1eb6c970-c61d-48ee-9647-cde886a67028-kube-api-access-cw88p\") pod \"1eb6c970-c61d-48ee-9647-cde886a67028\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.324688 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data-custom\") pod \"1eb6c970-c61d-48ee-9647-cde886a67028\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.324709 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-combined-ca-bundle\") pod \"1eb6c970-c61d-48ee-9647-cde886a67028\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.324758 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data\") pod \"1eb6c970-c61d-48ee-9647-cde886a67028\" (UID: \"1eb6c970-c61d-48ee-9647-cde886a67028\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.335347 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb6c970-c61d-48ee-9647-cde886a67028-kube-api-access-cw88p" (OuterVolumeSpecName: "kube-api-access-cw88p") pod "1eb6c970-c61d-48ee-9647-cde886a67028" (UID: "1eb6c970-c61d-48ee-9647-cde886a67028"). InnerVolumeSpecName "kube-api-access-cw88p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.355542 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1eb6c970-c61d-48ee-9647-cde886a67028" (UID: "1eb6c970-c61d-48ee-9647-cde886a67028"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.384666 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eb6c970-c61d-48ee-9647-cde886a67028" (UID: "1eb6c970-c61d-48ee-9647-cde886a67028"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.426861 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-combined-ca-bundle\") pod \"52194ea9-32ec-4f75-9365-37e63017872a\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.427028 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfch\" (UniqueName: \"kubernetes.io/projected/52194ea9-32ec-4f75-9365-37e63017872a-kube-api-access-5qfch\") pod \"52194ea9-32ec-4f75-9365-37e63017872a\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.427104 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data\") pod \"52194ea9-32ec-4f75-9365-37e63017872a\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.427163 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data-custom\") pod \"52194ea9-32ec-4f75-9365-37e63017872a\" (UID: \"52194ea9-32ec-4f75-9365-37e63017872a\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.428341 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw88p\" (UniqueName: \"kubernetes.io/projected/1eb6c970-c61d-48ee-9647-cde886a67028-kube-api-access-cw88p\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.428362 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.428373 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.429959 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data" (OuterVolumeSpecName: "config-data") pod "1eb6c970-c61d-48ee-9647-cde886a67028" (UID: "1eb6c970-c61d-48ee-9647-cde886a67028"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.431104 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52194ea9-32ec-4f75-9365-37e63017872a-kube-api-access-5qfch" (OuterVolumeSpecName: "kube-api-access-5qfch") pod "52194ea9-32ec-4f75-9365-37e63017872a" (UID: "52194ea9-32ec-4f75-9365-37e63017872a"). InnerVolumeSpecName "kube-api-access-5qfch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.431500 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "52194ea9-32ec-4f75-9365-37e63017872a" (UID: "52194ea9-32ec-4f75-9365-37e63017872a"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.454694 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.470282 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52194ea9-32ec-4f75-9365-37e63017872a" (UID: "52194ea9-32ec-4f75-9365-37e63017872a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.515446 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data" (OuterVolumeSpecName: "config-data") pod "52194ea9-32ec-4f75-9365-37e63017872a" (UID: "52194ea9-32ec-4f75-9365-37e63017872a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.530756 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.530791 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb6c970-c61d-48ee-9647-cde886a67028-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.530802 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfch\" (UniqueName: \"kubernetes.io/projected/52194ea9-32ec-4f75-9365-37e63017872a-kube-api-access-5qfch\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.530816 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.530828 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52194ea9-32ec-4f75-9365-37e63017872a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.537160 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" path="/var/lib/kubelet/pods/64f7518f-8e62-4d48-990c-1955072e5e98/volumes" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.537910 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" path="/var/lib/kubelet/pods/eb8abf79-d309-4440-9815-0ebcb93b9312/volumes" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.632112 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data\") pod \"a5788166-ebcd-434c-8684-3b2a5bbed6df\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.632191 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-scripts\") pod \"a5788166-ebcd-434c-8684-3b2a5bbed6df\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.632231 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-combined-ca-bundle\") pod \"a5788166-ebcd-434c-8684-3b2a5bbed6df\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.632377 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bnqk\" (UniqueName: \"kubernetes.io/projected/a5788166-ebcd-434c-8684-3b2a5bbed6df-kube-api-access-2bnqk\") pod \"a5788166-ebcd-434c-8684-3b2a5bbed6df\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.632449 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data-custom\") pod \"a5788166-ebcd-434c-8684-3b2a5bbed6df\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.632539 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5788166-ebcd-434c-8684-3b2a5bbed6df-etc-machine-id\") pod \"a5788166-ebcd-434c-8684-3b2a5bbed6df\" (UID: \"a5788166-ebcd-434c-8684-3b2a5bbed6df\") " Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.633120 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5788166-ebcd-434c-8684-3b2a5bbed6df-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a5788166-ebcd-434c-8684-3b2a5bbed6df" (UID: "a5788166-ebcd-434c-8684-3b2a5bbed6df"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.633352 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a5788166-ebcd-434c-8684-3b2a5bbed6df-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.637681 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-scripts" (OuterVolumeSpecName: "scripts") pod "a5788166-ebcd-434c-8684-3b2a5bbed6df" (UID: "a5788166-ebcd-434c-8684-3b2a5bbed6df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.642836 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5788166-ebcd-434c-8684-3b2a5bbed6df-kube-api-access-2bnqk" (OuterVolumeSpecName: "kube-api-access-2bnqk") pod "a5788166-ebcd-434c-8684-3b2a5bbed6df" (UID: "a5788166-ebcd-434c-8684-3b2a5bbed6df"). InnerVolumeSpecName "kube-api-access-2bnqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.643893 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a5788166-ebcd-434c-8684-3b2a5bbed6df" (UID: "a5788166-ebcd-434c-8684-3b2a5bbed6df"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.718034 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5788166-ebcd-434c-8684-3b2a5bbed6df" (UID: "a5788166-ebcd-434c-8684-3b2a5bbed6df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.737804 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.737846 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.737858 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bnqk\" (UniqueName: \"kubernetes.io/projected/a5788166-ebcd-434c-8684-3b2a5bbed6df-kube-api-access-2bnqk\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.737868 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.747482 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data" (OuterVolumeSpecName: "config-data") pod "a5788166-ebcd-434c-8684-3b2a5bbed6df" (UID: "a5788166-ebcd-434c-8684-3b2a5bbed6df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.840809 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5788166-ebcd-434c-8684-3b2a5bbed6df-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.909499 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f5858899-skqzp" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.909520 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f5858899-skqzp" event={"ID":"1eb6c970-c61d-48ee-9647-cde886a67028","Type":"ContainerDied","Data":"e578bf97dc5430ed819197b48ee6b80ed77c1ccad62a1332c916151a81f261b0"} Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.909618 4904 scope.go:117] "RemoveContainer" containerID="8808d894d1f3465ac36d91eff08b453c5c43038306177b9567623bb89202274a" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.917190 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" event={"ID":"52194ea9-32ec-4f75-9365-37e63017872a","Type":"ContainerDied","Data":"83bea8979f6a8bd56b2296b1465377107573316f1c61792ca78722fac62a9fd8"} Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.917737 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-667fdcb7f-d7lkr" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.930217 4904 generic.go:334] "Generic (PLEG): container finished" podID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerID="9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170" exitCode=0 Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.930287 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5788166-ebcd-434c-8684-3b2a5bbed6df","Type":"ContainerDied","Data":"9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170"} Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.930320 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a5788166-ebcd-434c-8684-3b2a5bbed6df","Type":"ContainerDied","Data":"5f65491b313874e534b2b63b510f956e98037fd2ab35713bf0204c8ca3d4a9f6"} Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.930337 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.938268 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.941718 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f5858899-skqzp"] Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.952303 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f5858899-skqzp"] Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.965835 4904 scope.go:117] "RemoveContainer" containerID="3b74ca00bad526db62c2c3e2e3784bc99a83e60a44a9ad1ed0a92785a6cdb35c" Nov 21 13:57:16 crc kubenswrapper[4904]: I1121 13:57:16.975757 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-667fdcb7f-d7lkr"] Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:16.995162 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-667fdcb7f-d7lkr"] Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.097300 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.100604 4904 scope.go:117] "RemoveContainer" containerID="82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.116689 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.137895 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138710 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52194ea9-32ec-4f75-9365-37e63017872a" containerName="heat-cfnapi" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138730 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="52194ea9-32ec-4f75-9365-37e63017872a" containerName="heat-cfnapi" Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138746 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="extract-content" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138753 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="extract-content" Nov 21 13:57:17 crc 
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138767 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api-log"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138774 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api-log"
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138785 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="extract-utilities"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138791 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="extract-utilities"
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138812 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" containerName="heat-api"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138820 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" containerName="heat-api"
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138832 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138838 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api"
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138849 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" containerName="heat-api"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138858 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" containerName="heat-api"
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138872 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="registry-server"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138878 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="registry-server"
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138890 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="cinder-scheduler"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138896 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="cinder-scheduler"
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.138909 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="probe"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.138915 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="probe"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139120 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="probe"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139132 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="52194ea9-32ec-4f75-9365-37e63017872a" containerName="heat-cfnapi"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139171 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" containerName="cinder-scheduler"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139186 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139196 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f7518f-8e62-4d48-990c-1955072e5e98" containerName="barbican-api-log"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139205 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="52194ea9-32ec-4f75-9365-37e63017872a" containerName="heat-cfnapi"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139214 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" containerName="heat-api"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139223 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8abf79-d309-4440-9815-0ebcb93b9312" containerName="registry-server"
Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.139422 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52194ea9-32ec-4f75-9365-37e63017872a" containerName="heat-cfnapi"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139431 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="52194ea9-32ec-4f75-9365-37e63017872a" containerName="heat-cfnapi"
Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.139608 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" containerName="heat-api"
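The cpu_manager/memory_manager burst above happens when the replacement cinder-scheduler-0 pod is admitted ("SyncLoop ADD"): both resource managers sweep their checkpointed per-container state and drop entries belonging to pods that no longer exist, which is why every pod deleted in the preceding minutes shows up here once more. A compact sketch of that sweep, with hypothetical types:

    // stale_state_sketch.go: drop per-container resource state for dead pods.
    package main

    import "fmt"

    type key struct{ podUID, container string }

    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
                delete(assignments, k) // "Deleted CPUSet assignment"
            }
        }
    }

    func main() {
        state := map[key]string{
            {"a5788166", "cinder-scheduler"}: "0-3", // old, deleted pod
            {"2b6933e2", "cinder-scheduler"}: "0-3", // new replacement pod
        }
        removeStaleState(state, map[string]bool{"2b6933e2": true})
    }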
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.143273 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.154541 4904 scope.go:117] "RemoveContainer" containerID="9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.198103 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.216581 4904 scope.go:117] "RemoveContainer" containerID="82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591" Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.217343 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591\": container with ID starting with 82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591 not found: ID does not exist" containerID="82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.217381 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591"} err="failed to get container status \"82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591\": rpc error: code = NotFound desc = could not find container \"82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591\": container with ID starting with 82ae2302c1a7544c415b284df4bc59f54866c85fd4e12a656d98a05788386591 not found: ID does not exist" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.217414 4904 scope.go:117] "RemoveContainer" containerID="9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170" Nov 21 13:57:17 crc kubenswrapper[4904]: E1121 13:57:17.217628 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170\": container with ID starting with 9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170 not found: ID does not exist" containerID="9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.217646 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170"} err="failed to get container status \"9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170\": rpc error: code = NotFound desc = could not find container \"9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170\": container with ID starting with 9ec3201ffe95e75a89d81b4fdc9a71172956f999fe080fcba4ee347678771170 not found: ID does not exist" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.260308 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.260510 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-scripts\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.260544 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b6933e2-83af-4bab-b20b-498a29cc1a68-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.260599 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzpdb\" (UniqueName: \"kubernetes.io/projected/2b6933e2-83af-4bab-b20b-498a29cc1a68-kube-api-access-qzpdb\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.260931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-config-data\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.260983 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.364164 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-config-data\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.364245 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.364331 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.364492 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-scripts\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.364511 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/2b6933e2-83af-4bab-b20b-498a29cc1a68-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.364574 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzpdb\" (UniqueName: \"kubernetes.io/projected/2b6933e2-83af-4bab-b20b-498a29cc1a68-kube-api-access-qzpdb\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.365112 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2b6933e2-83af-4bab-b20b-498a29cc1a68-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.370524 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-scripts\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.370608 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-config-data\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.384337 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.390345 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b6933e2-83af-4bab-b20b-498a29cc1a68-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.396272 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzpdb\" (UniqueName: \"kubernetes.io/projected/2b6933e2-83af-4bab-b20b-498a29cc1a68-kube-api-access-qzpdb\") pod \"cinder-scheduler-0\" (UID: \"2b6933e2-83af-4bab-b20b-498a29cc1a68\") " pod="openstack/cinder-scheduler-0" Nov 21 13:57:17 crc kubenswrapper[4904]: I1121 13:57:17.458630 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 21 13:57:18 crc kubenswrapper[4904]: I1121 13:57:18.033815 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 21 13:57:18 crc kubenswrapper[4904]: W1121 13:57:18.045414 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b6933e2_83af_4bab_b20b_498a29cc1a68.slice/crio-96888569c65900a8cab76639c9df8170a969da3d0dfc1cc219db3a7542841bea WatchSource:0}: Error finding container 96888569c65900a8cab76639c9df8170a969da3d0dfc1cc219db3a7542841bea: Status 404 returned error can't find the container with id 96888569c65900a8cab76639c9df8170a969da3d0dfc1cc219db3a7542841bea Nov 21 13:57:18 crc kubenswrapper[4904]: I1121 13:57:18.543199 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb6c970-c61d-48ee-9647-cde886a67028" path="/var/lib/kubelet/pods/1eb6c970-c61d-48ee-9647-cde886a67028/volumes" Nov 21 13:57:18 crc kubenswrapper[4904]: I1121 13:57:18.544285 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52194ea9-32ec-4f75-9365-37e63017872a" path="/var/lib/kubelet/pods/52194ea9-32ec-4f75-9365-37e63017872a/volumes" Nov 21 13:57:18 crc kubenswrapper[4904]: I1121 13:57:18.544925 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5788166-ebcd-434c-8684-3b2a5bbed6df" path="/var/lib/kubelet/pods/a5788166-ebcd-434c-8684-3b2a5bbed6df/volumes" Nov 21 13:57:18 crc kubenswrapper[4904]: I1121 13:57:18.987514 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b6933e2-83af-4bab-b20b-498a29cc1a68","Type":"ContainerStarted","Data":"4a95ed3d77723fcc49b76e814814d7940ef33595cf89a9b91af9d6b9c1b1a0f3"} Nov 21 13:57:18 crc kubenswrapper[4904]: I1121 13:57:18.988279 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b6933e2-83af-4bab-b20b-498a29cc1a68","Type":"ContainerStarted","Data":"96888569c65900a8cab76639c9df8170a969da3d0dfc1cc219db3a7542841bea"} Nov 21 13:57:20 crc kubenswrapper[4904]: I1121 13:57:20.002952 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2b6933e2-83af-4bab-b20b-498a29cc1a68","Type":"ContainerStarted","Data":"35b150f25c6aa96e925629e46d03dca8ad7a8ce5cafdfe1dfde04c21c4d68a79"} Nov 21 13:57:20 crc kubenswrapper[4904]: I1121 13:57:20.034538 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.034515966 podStartE2EDuration="3.034515966s" podCreationTimestamp="2025-11-21 13:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:20.02974192 +0000 UTC m=+1514.151274472" watchObservedRunningTime="2025-11-21 13:57:20.034515966 +0000 UTC m=+1514.156048528" Nov 21 13:57:22 crc kubenswrapper[4904]: I1121 13:57:22.459479 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 21 13:57:23 crc kubenswrapper[4904]: I1121 13:57:22.999738 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 13:57:25 crc kubenswrapper[4904]: I1121 13:57:25.926090 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 13:57:25 crc kubenswrapper[4904]: I1121 13:57:25.983536 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-65d45d497f-rtxs5"] Nov 21 13:57:25 crc kubenswrapper[4904]: I1121 13:57:25.983817 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-65d45d497f-rtxs5" podUID="3c42062c-c3df-4382-8bd5-0b9bb6cb711a" containerName="heat-engine" containerID="cri-o://70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247" gracePeriod=60 Nov 21 13:57:27 crc kubenswrapper[4904]: I1121 13:57:27.795476 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 21 13:57:27 crc kubenswrapper[4904]: I1121 13:57:27.966942 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.200:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:57:28 crc kubenswrapper[4904]: I1121 13:57:28.113286 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 13:57:28 crc kubenswrapper[4904]: I1121 13:57:28.113353 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 13:57:28 crc kubenswrapper[4904]: I1121 13:57:28.113409 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 13:57:28 crc kubenswrapper[4904]: I1121 13:57:28.114754 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 13:57:28 crc kubenswrapper[4904]: I1121 13:57:28.114823 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" gracePeriod=600 Nov 21 13:57:28 crc kubenswrapper[4904]: I1121 13:57:28.135089 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" event={"ID":"524ce65b-9914-4643-8132-50ee21805a8c","Type":"ContainerStarted","Data":"bc611156b6ae74e4b66bae9902dc7fd4f5bee5207383c65327f9866971fb5434"} Nov 21 13:57:28 crc kubenswrapper[4904]: I1121 13:57:28.157525 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" podStartSLOduration=2.573804292 podStartE2EDuration="15.157500943s" podCreationTimestamp="2025-11-21 13:57:13 +0000 
UTC" firstStartedPulling="2025-11-21 13:57:14.775281057 +0000 UTC m=+1508.896813609" lastFinishedPulling="2025-11-21 13:57:27.358977708 +0000 UTC m=+1521.480510260" observedRunningTime="2025-11-21 13:57:28.153464245 +0000 UTC m=+1522.274996817" watchObservedRunningTime="2025-11-21 13:57:28.157500943 +0000 UTC m=+1522.279033495" Nov 21 13:57:28 crc kubenswrapper[4904]: E1121 13:57:28.247812 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:57:29 crc kubenswrapper[4904]: I1121 13:57:29.148796 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" exitCode=0 Nov 21 13:57:29 crc kubenswrapper[4904]: I1121 13:57:29.148873 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991"} Nov 21 13:57:29 crc kubenswrapper[4904]: I1121 13:57:29.149327 4904 scope.go:117] "RemoveContainer" containerID="dcc2d7cbaa7ef87e0c42834de16075a7ebf9ca0b1a68c156ba86b82f49b3f653" Nov 21 13:57:29 crc kubenswrapper[4904]: I1121 13:57:29.150148 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:57:29 crc kubenswrapper[4904]: E1121 13:57:29.150438 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:57:33 crc kubenswrapper[4904]: I1121 13:57:33.384748 4904 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod713cdb85-aeb5-46c2-9fe0-bed76d06dc9a"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod713cdb85-aeb5-46c2-9fe0-bed76d06dc9a] : Timed out while waiting for systemd to remove kubepods-burstable-pod713cdb85_aeb5_46c2_9fe0_bed76d06dc9a.slice" Nov 21 13:57:35 crc kubenswrapper[4904]: E1121 13:57:35.399554 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247 is running failed: container process not found" containerID="70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 13:57:35 crc kubenswrapper[4904]: E1121 13:57:35.400579 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247 is running failed: container process not found" 
containerID="70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 13:57:35 crc kubenswrapper[4904]: E1121 13:57:35.401006 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247 is running failed: container process not found" containerID="70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 13:57:35 crc kubenswrapper[4904]: E1121 13:57:35.401046 4904 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-65d45d497f-rtxs5" podUID="3c42062c-c3df-4382-8bd5-0b9bb6cb711a" containerName="heat-engine" Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.786500 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.847324 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.913586 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data\") pod \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.913726 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-combined-ca-bundle\") pod \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.913986 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvzr6\" (UniqueName: \"kubernetes.io/projected/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-kube-api-access-kvzr6\") pod \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.914133 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data-custom\") pod \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\" (UID: \"3c42062c-c3df-4382-8bd5-0b9bb6cb711a\") " Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.921929 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-kube-api-access-kvzr6" (OuterVolumeSpecName: "kube-api-access-kvzr6") pod "3c42062c-c3df-4382-8bd5-0b9bb6cb711a" (UID: "3c42062c-c3df-4382-8bd5-0b9bb6cb711a"). InnerVolumeSpecName "kube-api-access-kvzr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.925971 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c42062c-c3df-4382-8bd5-0b9bb6cb711a" (UID: "3c42062c-c3df-4382-8bd5-0b9bb6cb711a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.978873 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c42062c-c3df-4382-8bd5-0b9bb6cb711a" (UID: "3c42062c-c3df-4382-8bd5-0b9bb6cb711a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:35 crc kubenswrapper[4904]: I1121 13:57:35.990632 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data" (OuterVolumeSpecName: "config-data") pod "3c42062c-c3df-4382-8bd5-0b9bb6cb711a" (UID: "3c42062c-c3df-4382-8bd5-0b9bb6cb711a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.016750 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9trl\" (UniqueName: \"kubernetes.io/projected/4a33906f-29de-4dea-bd13-a149e36b146c-kube-api-access-h9trl\") pod \"4a33906f-29de-4dea-bd13-a149e36b146c\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.016844 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a33906f-29de-4dea-bd13-a149e36b146c-etc-machine-id\") pod \"4a33906f-29de-4dea-bd13-a149e36b146c\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.016994 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data-custom\") pod \"4a33906f-29de-4dea-bd13-a149e36b146c\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.017034 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-scripts\") pod \"4a33906f-29de-4dea-bd13-a149e36b146c\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.017061 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a33906f-29de-4dea-bd13-a149e36b146c-logs\") pod \"4a33906f-29de-4dea-bd13-a149e36b146c\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.017172 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-combined-ca-bundle\") pod \"4a33906f-29de-4dea-bd13-a149e36b146c\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.017270 
4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data\") pod \"4a33906f-29de-4dea-bd13-a149e36b146c\" (UID: \"4a33906f-29de-4dea-bd13-a149e36b146c\") " Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.017742 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.017762 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.017774 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.017784 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvzr6\" (UniqueName: \"kubernetes.io/projected/3c42062c-c3df-4382-8bd5-0b9bb6cb711a-kube-api-access-kvzr6\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.018008 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a33906f-29de-4dea-bd13-a149e36b146c-logs" (OuterVolumeSpecName: "logs") pod "4a33906f-29de-4dea-bd13-a149e36b146c" (UID: "4a33906f-29de-4dea-bd13-a149e36b146c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.018017 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a33906f-29de-4dea-bd13-a149e36b146c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4a33906f-29de-4dea-bd13-a149e36b146c" (UID: "4a33906f-29de-4dea-bd13-a149e36b146c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.022178 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-scripts" (OuterVolumeSpecName: "scripts") pod "4a33906f-29de-4dea-bd13-a149e36b146c" (UID: "4a33906f-29de-4dea-bd13-a149e36b146c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.023790 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a33906f-29de-4dea-bd13-a149e36b146c-kube-api-access-h9trl" (OuterVolumeSpecName: "kube-api-access-h9trl") pod "4a33906f-29de-4dea-bd13-a149e36b146c" (UID: "4a33906f-29de-4dea-bd13-a149e36b146c"). InnerVolumeSpecName "kube-api-access-h9trl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.023798 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4a33906f-29de-4dea-bd13-a149e36b146c" (UID: "4a33906f-29de-4dea-bd13-a149e36b146c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.050643 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a33906f-29de-4dea-bd13-a149e36b146c" (UID: "4a33906f-29de-4dea-bd13-a149e36b146c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.079878 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data" (OuterVolumeSpecName: "config-data") pod "4a33906f-29de-4dea-bd13-a149e36b146c" (UID: "4a33906f-29de-4dea-bd13-a149e36b146c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.120370 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.120413 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.120425 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a33906f-29de-4dea-bd13-a149e36b146c-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.120434 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.120442 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a33906f-29de-4dea-bd13-a149e36b146c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.120456 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9trl\" (UniqueName: \"kubernetes.io/projected/4a33906f-29de-4dea-bd13-a149e36b146c-kube-api-access-h9trl\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.120470 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4a33906f-29de-4dea-bd13-a149e36b146c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.235441 4904 generic.go:334] "Generic (PLEG): container finished" podID="4a33906f-29de-4dea-bd13-a149e36b146c" containerID="24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e" exitCode=137 Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.235532 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"4a33906f-29de-4dea-bd13-a149e36b146c","Type":"ContainerDied","Data":"24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e"} Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.235573 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"4a33906f-29de-4dea-bd13-a149e36b146c","Type":"ContainerDied","Data":"bf049b8f6afacb0a59a5a8110bf31b671e23836bd7925315592f65dd0cf38520"} Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.235598 4904 scope.go:117] "RemoveContainer" containerID="24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.235837 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.238975 4904 generic.go:334] "Generic (PLEG): container finished" podID="3c42062c-c3df-4382-8bd5-0b9bb6cb711a" containerID="70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247" exitCode=0 Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.239019 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-65d45d497f-rtxs5" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.239037 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65d45d497f-rtxs5" event={"ID":"3c42062c-c3df-4382-8bd5-0b9bb6cb711a","Type":"ContainerDied","Data":"70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247"} Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.239073 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-65d45d497f-rtxs5" event={"ID":"3c42062c-c3df-4382-8bd5-0b9bb6cb711a","Type":"ContainerDied","Data":"edd81a224de63ff330684297fce1cf81e5fbf28cbd26e5502fe1823bd531513b"} Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.268732 4904 scope.go:117] "RemoveContainer" containerID="52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.286437 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-65d45d497f-rtxs5"] Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.304353 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-65d45d497f-rtxs5"] Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.311327 4904 scope.go:117] "RemoveContainer" containerID="24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e" Nov 21 13:57:36 crc kubenswrapper[4904]: E1121 13:57:36.312045 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e\": container with ID starting with 24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e not found: ID does not exist" containerID="24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.312190 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e"} err="failed to get container status \"24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e\": rpc error: code = NotFound desc = could not find container \"24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e\": container with ID starting with 24fabcc280e5829c35f180da71341f5fde3811fe869c1e1c2dcba8ad21fd616e not found: ID does not exist" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.312314 4904 scope.go:117] "RemoveContainer" containerID="52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 
Nov 21 13:57:36 crc kubenswrapper[4904]: E1121 13:57:36.321922 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4\": container with ID starting with 52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4 not found: ID does not exist" containerID="52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.326238 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4"} err="failed to get container status \"52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4\": rpc error: code = NotFound desc = could not find container \"52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4\": container with ID starting with 52b08312b9f8da3536b7beb92f2db829cde496faf2ff1c2762d51f2c8aca5fc4 not found: ID does not exist"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.326627 4904 scope.go:117] "RemoveContainer" containerID="70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.328617 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.348254 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 21 13:57:36 crc kubenswrapper[4904]: E1121 13:57:36.353719 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c42062c-c3df-4382-8bd5-0b9bb6cb711a" containerName="heat-engine"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.353756 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c42062c-c3df-4382-8bd5-0b9bb6cb711a" containerName="heat-engine"
Nov 21 13:57:36 crc kubenswrapper[4904]: E1121 13:57:36.353834 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.353847 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api"
Nov 21 13:57:36 crc kubenswrapper[4904]: E1121 13:57:36.353866 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api-log"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.353873 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api-log"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.354217 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.354240 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c42062c-c3df-4382-8bd5-0b9bb6cb711a" containerName="heat-engine"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.354261 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" containerName="cinder-api-log"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.356436 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.359244 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.359381 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.359486 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.363961 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.396218 4904 scope.go:117] "RemoveContainer" containerID="70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247"
Nov 21 13:57:36 crc kubenswrapper[4904]: E1121 13:57:36.397090 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247\": container with ID starting with 70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247 not found: ID does not exist" containerID="70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.397155 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247"} err="failed to get container status \"70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247\": rpc error: code = NotFound desc = could not find container \"70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247\": container with ID starting with 70144617842c2f4e5e8266dc7b4a84112e9a1b7955c4aa1c8f2563d70052f247 not found: ID does not exist"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.529541 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c42062c-c3df-4382-8bd5-0b9bb6cb711a" path="/var/lib/kubelet/pods/3c42062c-c3df-4382-8bd5-0b9bb6cb711a/volumes"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.531250 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a33906f-29de-4dea-bd13-a149e36b146c" path="/var/lib/kubelet/pods/4a33906f-29de-4dea-bd13-a149e36b146c/volumes"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.531961 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.532030 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-scripts\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.532089 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-config-data\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.532229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.532359 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.532454 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-logs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.532490 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cg7\" (UniqueName: \"kubernetes.io/projected/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-kube-api-access-66cg7\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.532612 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.532648 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.635450 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-logs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.635523 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66cg7\" (UniqueName: \"kubernetes.io/projected/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-kube-api-access-66cg7\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.635657 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.635727 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.635818 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.635865 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-scripts\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.635960 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-config-data\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.636015 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.636065 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.636099 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.636102 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-logs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.642576 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-config-data-custom\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.642606 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-public-tls-certs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.643318 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0"
\"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.643551 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-config-data\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.647385 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-scripts\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.652041 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.665450 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66cg7\" (UniqueName: \"kubernetes.io/projected/fd487d38-5efb-41f0-88f9-ba5360b8c3cf-kube-api-access-66cg7\") pod \"cinder-api-0\" (UID: \"fd487d38-5efb-41f0-88f9-ba5360b8c3cf\") " pod="openstack/cinder-api-0" Nov 21 13:57:36 crc kubenswrapper[4904]: I1121 13:57:36.694810 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 21 13:57:37 crc kubenswrapper[4904]: I1121 13:57:37.273620 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 21 13:57:38 crc kubenswrapper[4904]: I1121 13:57:38.276198 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd487d38-5efb-41f0-88f9-ba5360b8c3cf","Type":"ContainerStarted","Data":"7b12c40b1118f3b14ee590e8e8bc0be106157c452b74e99673e9e7a558b21975"} Nov 21 13:57:38 crc kubenswrapper[4904]: I1121 13:57:38.276566 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd487d38-5efb-41f0-88f9-ba5360b8c3cf","Type":"ContainerStarted","Data":"974c43e3f35a93ac0b23aa8437ff324f9d985d33f939edd18c79ae2cf06843ea"} Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.289147 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fd487d38-5efb-41f0-88f9-ba5360b8c3cf","Type":"ContainerStarted","Data":"0d2e055cbd2790736d9ed6006c1ddae5db49d5f3536ad78df7052cf21d2bf356"} Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.289790 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.325538 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.325515874 podStartE2EDuration="3.325515874s" podCreationTimestamp="2025-11-21 13:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:39.310368545 +0000 UTC m=+1533.431901097" watchObservedRunningTime="2025-11-21 13:57:39.325515874 +0000 UTC m=+1533.447048426" Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 
13:57:39.821322 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.991564 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-sg-core-conf-yaml\") pod \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.991832 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-combined-ca-bundle\") pod \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.991862 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-config-data\") pod \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.991909 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-run-httpd\") pod \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.992062 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-log-httpd\") pod \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.992088 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt76t\" (UniqueName: \"kubernetes.io/projected/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-kube-api-access-xt76t\") pod \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.992175 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-scripts\") pod \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\" (UID: \"d12ae69a-f1da-42a3-811f-b8c2a95dde5a\") " Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.992751 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d12ae69a-f1da-42a3-811f-b8c2a95dde5a" (UID: "d12ae69a-f1da-42a3-811f-b8c2a95dde5a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:39 crc kubenswrapper[4904]: I1121 13:57:39.993274 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d12ae69a-f1da-42a3-811f-b8c2a95dde5a" (UID: "d12ae69a-f1da-42a3-811f-b8c2a95dde5a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.003057 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-kube-api-access-xt76t" (OuterVolumeSpecName: "kube-api-access-xt76t") pod "d12ae69a-f1da-42a3-811f-b8c2a95dde5a" (UID: "d12ae69a-f1da-42a3-811f-b8c2a95dde5a"). InnerVolumeSpecName "kube-api-access-xt76t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.008457 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-scripts" (OuterVolumeSpecName: "scripts") pod "d12ae69a-f1da-42a3-811f-b8c2a95dde5a" (UID: "d12ae69a-f1da-42a3-811f-b8c2a95dde5a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.031245 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d12ae69a-f1da-42a3-811f-b8c2a95dde5a" (UID: "d12ae69a-f1da-42a3-811f-b8c2a95dde5a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.094862 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.094904 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.094918 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt76t\" (UniqueName: \"kubernetes.io/projected/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-kube-api-access-xt76t\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.094934 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.094947 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.099217 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d12ae69a-f1da-42a3-811f-b8c2a95dde5a" (UID: "d12ae69a-f1da-42a3-811f-b8c2a95dde5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.147116 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-config-data" (OuterVolumeSpecName: "config-data") pod "d12ae69a-f1da-42a3-811f-b8c2a95dde5a" (UID: "d12ae69a-f1da-42a3-811f-b8c2a95dde5a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.196805 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.196864 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d12ae69a-f1da-42a3-811f-b8c2a95dde5a-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.304054 4904 generic.go:334] "Generic (PLEG): container finished" podID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerID="bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b" exitCode=137 Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.304122 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.304136 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerDied","Data":"bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b"} Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.304814 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d12ae69a-f1da-42a3-811f-b8c2a95dde5a","Type":"ContainerDied","Data":"6a777386dfc640c7987472952aa964912025cf28c757758d3276221b8a3ec0c3"} Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.304862 4904 scope.go:117] "RemoveContainer" containerID="bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.335185 4904 scope.go:117] "RemoveContainer" containerID="5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.367573 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.381285 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.391541 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:40 crc kubenswrapper[4904]: E1121 13:57:40.392190 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="ceilometer-central-agent" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.392211 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="ceilometer-central-agent" Nov 21 13:57:40 crc kubenswrapper[4904]: E1121 13:57:40.392239 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="sg-core" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.392249 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="sg-core" Nov 21 13:57:40 crc kubenswrapper[4904]: E1121 13:57:40.392269 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="ceilometer-notification-agent" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.392277 4904 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="ceilometer-notification-agent" Nov 21 13:57:40 crc kubenswrapper[4904]: E1121 13:57:40.392308 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="proxy-httpd" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.392329 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="proxy-httpd" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.392601 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="proxy-httpd" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.392631 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="sg-core" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.392645 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="ceilometer-central-agent" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.392698 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" containerName="ceilometer-notification-agent" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.398040 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.405358 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.407536 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.413288 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.446007 4904 scope.go:117] "RemoveContainer" containerID="ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.486682 4904 scope.go:117] "RemoveContainer" containerID="7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.502805 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-config-data\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.503038 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bwh\" (UniqueName: \"kubernetes.io/projected/0641bd6c-e77d-4191-b380-7d389b00cd33-kube-api-access-v7bwh\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.503378 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-scripts\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.503546 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.503934 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-run-httpd\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.504074 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-log-httpd\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.504190 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.510979 4904 scope.go:117] "RemoveContainer" containerID="bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b" Nov 21 13:57:40 crc kubenswrapper[4904]: E1121 13:57:40.511647 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b\": container with ID starting with bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b not found: ID does not exist" containerID="bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.511713 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b"} err="failed to get container status \"bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b\": rpc error: code = NotFound desc = could not find container \"bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b\": container with ID starting with bdf24d5749a4217969de02c27b065864c98b830f4977b17bcdb330b14cf7c82b not found: ID does not exist" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.511737 4904 scope.go:117] "RemoveContainer" containerID="5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34" Nov 21 13:57:40 crc kubenswrapper[4904]: E1121 13:57:40.513863 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34\": container with ID starting with 5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34 not found: ID does not exist" containerID="5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.513928 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34"} err="failed to get container status 
\"5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34\": rpc error: code = NotFound desc = could not find container \"5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34\": container with ID starting with 5e91b0acbc51812355d2a3b022b75d88a9754298d1c75a8e6ff0d75bd7aeed34 not found: ID does not exist" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.513944 4904 scope.go:117] "RemoveContainer" containerID="ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921" Nov 21 13:57:40 crc kubenswrapper[4904]: E1121 13:57:40.514231 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921\": container with ID starting with ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921 not found: ID does not exist" containerID="ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.514256 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921"} err="failed to get container status \"ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921\": rpc error: code = NotFound desc = could not find container \"ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921\": container with ID starting with ea823f4070fd6eb68869cb27694547da98eee59c88deea7eb181c7e1c1045921 not found: ID does not exist" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.514272 4904 scope.go:117] "RemoveContainer" containerID="7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d" Nov 21 13:57:40 crc kubenswrapper[4904]: E1121 13:57:40.514612 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d\": container with ID starting with 7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d not found: ID does not exist" containerID="7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.514633 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d"} err="failed to get container status \"7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d\": rpc error: code = NotFound desc = could not find container \"7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d\": container with ID starting with 7dc4165779e04232ed4f63ebbaf00c70383c675bbd7f8be070de26cc95209c7d not found: ID does not exist" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.525590 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d12ae69a-f1da-42a3-811f-b8c2a95dde5a" path="/var/lib/kubelet/pods/d12ae69a-f1da-42a3-811f-b8c2a95dde5a/volumes" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.606454 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.606549 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-run-httpd\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.606576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-log-httpd\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.606601 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.606689 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-config-data\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.606758 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bwh\" (UniqueName: \"kubernetes.io/projected/0641bd6c-e77d-4191-b380-7d389b00cd33-kube-api-access-v7bwh\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.606839 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-scripts\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.607304 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-run-httpd\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.607453 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-log-httpd\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.612871 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.613110 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.613593 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-scripts\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.615143 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-config-data\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.633558 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bwh\" (UniqueName: \"kubernetes.io/projected/0641bd6c-e77d-4191-b380-7d389b00cd33-kube-api-access-v7bwh\") pod \"ceilometer-0\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " pod="openstack/ceilometer-0" Nov 21 13:57:40 crc kubenswrapper[4904]: I1121 13:57:40.732312 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:41 crc kubenswrapper[4904]: I1121 13:57:41.277318 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:41 crc kubenswrapper[4904]: W1121 13:57:41.289196 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0641bd6c_e77d_4191_b380_7d389b00cd33.slice/crio-03b8289ca64a5a080b2cf130635b2eee3b8f95c3b3f4710b138f75e0c390d8e4 WatchSource:0}: Error finding container 03b8289ca64a5a080b2cf130635b2eee3b8f95c3b3f4710b138f75e0c390d8e4: Status 404 returned error can't find the container with id 03b8289ca64a5a080b2cf130635b2eee3b8f95c3b3f4710b138f75e0c390d8e4 Nov 21 13:57:41 crc kubenswrapper[4904]: I1121 13:57:41.341702 4904 generic.go:334] "Generic (PLEG): container finished" podID="524ce65b-9914-4643-8132-50ee21805a8c" containerID="bc611156b6ae74e4b66bae9902dc7fd4f5bee5207383c65327f9866971fb5434" exitCode=0 Nov 21 13:57:41 crc kubenswrapper[4904]: I1121 13:57:41.341802 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" event={"ID":"524ce65b-9914-4643-8132-50ee21805a8c","Type":"ContainerDied","Data":"bc611156b6ae74e4b66bae9902dc7fd4f5bee5207383c65327f9866971fb5434"} Nov 21 13:57:41 crc kubenswrapper[4904]: I1121 13:57:41.344788 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerStarted","Data":"03b8289ca64a5a080b2cf130635b2eee3b8f95c3b3f4710b138f75e0c390d8e4"} Nov 21 13:57:41 crc kubenswrapper[4904]: I1121 13:57:41.752303 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:42 crc kubenswrapper[4904]: I1121 13:57:42.358455 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerStarted","Data":"b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9"} Nov 21 13:57:42 crc kubenswrapper[4904]: I1121 13:57:42.893698 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:42 crc kubenswrapper[4904]: I1121 13:57:42.961951 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-config-data\") pod \"524ce65b-9914-4643-8132-50ee21805a8c\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " Nov 21 13:57:42 crc kubenswrapper[4904]: I1121 13:57:42.962032 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-scripts\") pod \"524ce65b-9914-4643-8132-50ee21805a8c\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " Nov 21 13:57:42 crc kubenswrapper[4904]: I1121 13:57:42.962064 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pp2d\" (UniqueName: \"kubernetes.io/projected/524ce65b-9914-4643-8132-50ee21805a8c-kube-api-access-5pp2d\") pod \"524ce65b-9914-4643-8132-50ee21805a8c\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " Nov 21 13:57:42 crc kubenswrapper[4904]: I1121 13:57:42.962133 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-combined-ca-bundle\") pod \"524ce65b-9914-4643-8132-50ee21805a8c\" (UID: \"524ce65b-9914-4643-8132-50ee21805a8c\") " Nov 21 13:57:42 crc kubenswrapper[4904]: I1121 13:57:42.970809 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-scripts" (OuterVolumeSpecName: "scripts") pod "524ce65b-9914-4643-8132-50ee21805a8c" (UID: "524ce65b-9914-4643-8132-50ee21805a8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:42 crc kubenswrapper[4904]: I1121 13:57:42.994768 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524ce65b-9914-4643-8132-50ee21805a8c-kube-api-access-5pp2d" (OuterVolumeSpecName: "kube-api-access-5pp2d") pod "524ce65b-9914-4643-8132-50ee21805a8c" (UID: "524ce65b-9914-4643-8132-50ee21805a8c"). InnerVolumeSpecName "kube-api-access-5pp2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.031779 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "524ce65b-9914-4643-8132-50ee21805a8c" (UID: "524ce65b-9914-4643-8132-50ee21805a8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.045017 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-config-data" (OuterVolumeSpecName: "config-data") pod "524ce65b-9914-4643-8132-50ee21805a8c" (UID: "524ce65b-9914-4643-8132-50ee21805a8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.065282 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.065525 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.065625 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pp2d\" (UniqueName: \"kubernetes.io/projected/524ce65b-9914-4643-8132-50ee21805a8c-kube-api-access-5pp2d\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.065715 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/524ce65b-9914-4643-8132-50ee21805a8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.375499 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" event={"ID":"524ce65b-9914-4643-8132-50ee21805a8c","Type":"ContainerDied","Data":"0675c134f402a4ba14c5c2517e87259158c98af68142eb48ed60612f0d2a74ba"} Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.376956 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0675c134f402a4ba14c5c2517e87259158c98af68142eb48ed60612f0d2a74ba" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.375589 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5mb5p" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.534764 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:43 crc kubenswrapper[4904]: E1121 13:57:43.535623 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524ce65b-9914-4643-8132-50ee21805a8c" containerName="nova-cell0-conductor-db-sync" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.535709 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="524ce65b-9914-4643-8132-50ee21805a8c" containerName="nova-cell0-conductor-db-sync" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.535982 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="524ce65b-9914-4643-8132-50ee21805a8c" containerName="nova-cell0-conductor-db-sync" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.536885 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.540697 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.541435 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r2vn8" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.551193 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.679301 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg6lh\" (UniqueName: \"kubernetes.io/projected/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-kube-api-access-tg6lh\") pod \"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.679791 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.679824 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.786387 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg6lh\" (UniqueName: \"kubernetes.io/projected/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-kube-api-access-tg6lh\") pod \"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.786844 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.786940 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.801780 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.818412 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg6lh\" (UniqueName: \"kubernetes.io/projected/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-kube-api-access-tg6lh\") pod 
\"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.819595 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:43 crc kubenswrapper[4904]: I1121 13:57:43.922731 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:44 crc kubenswrapper[4904]: I1121 13:57:44.387700 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerStarted","Data":"336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5"} Nov 21 13:57:44 crc kubenswrapper[4904]: I1121 13:57:44.388350 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerStarted","Data":"627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30"} Nov 21 13:57:44 crc kubenswrapper[4904]: I1121 13:57:44.459239 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:44 crc kubenswrapper[4904]: I1121 13:57:44.513212 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:57:44 crc kubenswrapper[4904]: E1121 13:57:44.513509 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:57:45 crc kubenswrapper[4904]: I1121 13:57:45.413049 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e","Type":"ContainerStarted","Data":"113db31bf1b39e6d1a2702a368e6bad8cac1144c9b58842e1cedf3bebc765cd3"} Nov 21 13:57:45 crc kubenswrapper[4904]: I1121 13:57:45.413409 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e","Type":"ContainerStarted","Data":"b72d4573ed76c5a3e1ff2fec9fe3bd9fd3c48d3fa9eb74897badc88ae4d313d4"} Nov 21 13:57:45 crc kubenswrapper[4904]: I1121 13:57:45.414927 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:45 crc kubenswrapper[4904]: I1121 13:57:45.436477 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.436459583 podStartE2EDuration="2.436459583s" podCreationTimestamp="2025-11-21 13:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:45.432912706 +0000 UTC m=+1539.554445258" watchObservedRunningTime="2025-11-21 13:57:45.436459583 +0000 UTC m=+1539.557992135" Nov 21 13:57:46 crc kubenswrapper[4904]: I1121 13:57:46.426600 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerStarted","Data":"fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7"} Nov 21 13:57:46 crc kubenswrapper[4904]: I1121 13:57:46.427008 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="ceilometer-central-agent" containerID="cri-o://b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9" gracePeriod=30 Nov 21 13:57:46 crc kubenswrapper[4904]: I1121 13:57:46.427048 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="proxy-httpd" containerID="cri-o://fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7" gracePeriod=30 Nov 21 13:57:46 crc kubenswrapper[4904]: I1121 13:57:46.427087 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="sg-core" containerID="cri-o://336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5" gracePeriod=30 Nov 21 13:57:46 crc kubenswrapper[4904]: I1121 13:57:46.427113 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="ceilometer-notification-agent" containerID="cri-o://627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30" gracePeriod=30 Nov 21 13:57:46 crc kubenswrapper[4904]: I1121 13:57:46.469292 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.088268473 podStartE2EDuration="6.469262105s" podCreationTimestamp="2025-11-21 13:57:40 +0000 UTC" firstStartedPulling="2025-11-21 13:57:41.293203642 +0000 UTC m=+1535.414736194" lastFinishedPulling="2025-11-21 13:57:45.674197264 +0000 UTC m=+1539.795729826" observedRunningTime="2025-11-21 13:57:46.459938378 +0000 UTC m=+1540.581470940" watchObservedRunningTime="2025-11-21 13:57:46.469262105 +0000 UTC m=+1540.590794657" Nov 21 13:57:47 crc kubenswrapper[4904]: I1121 13:57:47.442554 4904 generic.go:334] "Generic (PLEG): container finished" podID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerID="fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7" exitCode=0 Nov 21 13:57:47 crc kubenswrapper[4904]: I1121 13:57:47.443034 4904 generic.go:334] "Generic (PLEG): container finished" podID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerID="336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5" exitCode=2 Nov 21 13:57:47 crc kubenswrapper[4904]: I1121 13:57:47.443044 4904 generic.go:334] "Generic (PLEG): container finished" podID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerID="627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30" exitCode=0 Nov 21 13:57:47 crc kubenswrapper[4904]: I1121 13:57:47.443370 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerDied","Data":"fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7"} Nov 21 13:57:47 crc kubenswrapper[4904]: I1121 13:57:47.443449 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerDied","Data":"336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5"} Nov 
21 13:57:47 crc kubenswrapper[4904]: I1121 13:57:47.443466 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerDied","Data":"627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30"} Nov 21 13:57:48 crc kubenswrapper[4904]: I1121 13:57:48.815342 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 21 13:57:50 crc kubenswrapper[4904]: I1121 13:57:50.480285 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:50 crc kubenswrapper[4904]: I1121 13:57:50.481078 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" containerName="nova-cell0-conductor-conductor" containerID="cri-o://113db31bf1b39e6d1a2702a368e6bad8cac1144c9b58842e1cedf3bebc765cd3" gracePeriod=30 Nov 21 13:57:50 crc kubenswrapper[4904]: E1121 13:57:50.487325 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113db31bf1b39e6d1a2702a368e6bad8cac1144c9b58842e1cedf3bebc765cd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 13:57:50 crc kubenswrapper[4904]: E1121 13:57:50.497573 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113db31bf1b39e6d1a2702a368e6bad8cac1144c9b58842e1cedf3bebc765cd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 13:57:50 crc kubenswrapper[4904]: E1121 13:57:50.499672 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="113db31bf1b39e6d1a2702a368e6bad8cac1144c9b58842e1cedf3bebc765cd3" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 21 13:57:50 crc kubenswrapper[4904]: E1121 13:57:50.499757 4904 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" containerName="nova-cell0-conductor-conductor" Nov 21 13:57:51 crc kubenswrapper[4904]: I1121 13:57:51.504159 4904 generic.go:334] "Generic (PLEG): container finished" podID="ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" containerID="113db31bf1b39e6d1a2702a368e6bad8cac1144c9b58842e1cedf3bebc765cd3" exitCode=0 Nov 21 13:57:51 crc kubenswrapper[4904]: I1121 13:57:51.504209 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e","Type":"ContainerDied","Data":"113db31bf1b39e6d1a2702a368e6bad8cac1144c9b58842e1cedf3bebc765cd3"} Nov 21 13:57:51 crc kubenswrapper[4904]: I1121 13:57:51.950629 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.106313 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg6lh\" (UniqueName: \"kubernetes.io/projected/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-kube-api-access-tg6lh\") pod \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.106467 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-combined-ca-bundle\") pod \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.106631 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-config-data\") pod \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\" (UID: \"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e\") " Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.131980 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-kube-api-access-tg6lh" (OuterVolumeSpecName: "kube-api-access-tg6lh") pod "ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" (UID: "ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e"). InnerVolumeSpecName "kube-api-access-tg6lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.146900 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" (UID: "ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.167476 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-config-data" (OuterVolumeSpecName: "config-data") pod "ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" (UID: "ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.208930 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg6lh\" (UniqueName: \"kubernetes.io/projected/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-kube-api-access-tg6lh\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.208966 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.208975 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.519008 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.565124 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e","Type":"ContainerDied","Data":"b72d4573ed76c5a3e1ff2fec9fe3bd9fd3c48d3fa9eb74897badc88ae4d313d4"} Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.565198 4904 scope.go:117] "RemoveContainer" containerID="113db31bf1b39e6d1a2702a368e6bad8cac1144c9b58842e1cedf3bebc765cd3" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.586414 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.616813 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.625958 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:52 crc kubenswrapper[4904]: E1121 13:57:52.626611 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" containerName="nova-cell0-conductor-conductor" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.626637 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" containerName="nova-cell0-conductor-conductor" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.626908 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" containerName="nova-cell0-conductor-conductor" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.628123 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.630350 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.630792 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r2vn8" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.639035 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.720687 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzh8r\" (UniqueName: \"kubernetes.io/projected/cbabfd9a-3db8-4b71-886c-1986df601c51-kube-api-access-xzh8r\") pod \"nova-cell0-conductor-0\" (UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.720750 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbabfd9a-3db8-4b71-886c-1986df601c51-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.720952 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbabfd9a-3db8-4b71-886c-1986df601c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.823356 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbabfd9a-3db8-4b71-886c-1986df601c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.823460 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzh8r\" (UniqueName: \"kubernetes.io/projected/cbabfd9a-3db8-4b71-886c-1986df601c51-kube-api-access-xzh8r\") pod \"nova-cell0-conductor-0\" (UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.823492 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbabfd9a-3db8-4b71-886c-1986df601c51-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.828778 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbabfd9a-3db8-4b71-886c-1986df601c51-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.834135 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbabfd9a-3db8-4b71-886c-1986df601c51-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.842874 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzh8r\" (UniqueName: \"kubernetes.io/projected/cbabfd9a-3db8-4b71-886c-1986df601c51-kube-api-access-xzh8r\") pod \"nova-cell0-conductor-0\" (UID: \"cbabfd9a-3db8-4b71-886c-1986df601c51\") " pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:52 crc kubenswrapper[4904]: I1121 13:57:52.963534 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.383095 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-mx7lv"] Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.385225 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.403756 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-6c90-account-create-4nfs9"] Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.408170 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.417192 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.418188 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-mx7lv"] Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.427800 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6c90-account-create-4nfs9"] Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.439605 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ktnf\" (UniqueName: \"kubernetes.io/projected/ebe1209b-266b-4875-9356-4592af75e127-kube-api-access-8ktnf\") pod \"aodh-db-create-mx7lv\" (UID: \"ebe1209b-266b-4875-9356-4592af75e127\") " pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.439720 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebe1209b-266b-4875-9356-4592af75e127-operator-scripts\") pod \"aodh-db-create-mx7lv\" (UID: \"ebe1209b-266b-4875-9356-4592af75e127\") " pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.533627 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.541816 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ktnf\" (UniqueName: \"kubernetes.io/projected/ebe1209b-266b-4875-9356-4592af75e127-kube-api-access-8ktnf\") pod \"aodh-db-create-mx7lv\" (UID: \"ebe1209b-266b-4875-9356-4592af75e127\") " pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.542057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebe1209b-266b-4875-9356-4592af75e127-operator-scripts\") pod \"aodh-db-create-mx7lv\" (UID: \"ebe1209b-266b-4875-9356-4592af75e127\") " pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:53 crc 
kubenswrapper[4904]: I1121 13:57:53.542207 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-operator-scripts\") pod \"aodh-6c90-account-create-4nfs9\" (UID: \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\") " pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.542411 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c29l\" (UniqueName: \"kubernetes.io/projected/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-kube-api-access-9c29l\") pod \"aodh-6c90-account-create-4nfs9\" (UID: \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\") " pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.542930 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebe1209b-266b-4875-9356-4592af75e127-operator-scripts\") pod \"aodh-db-create-mx7lv\" (UID: \"ebe1209b-266b-4875-9356-4592af75e127\") " pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.566438 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ktnf\" (UniqueName: \"kubernetes.io/projected/ebe1209b-266b-4875-9356-4592af75e127-kube-api-access-8ktnf\") pod \"aodh-db-create-mx7lv\" (UID: \"ebe1209b-266b-4875-9356-4592af75e127\") " pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.644849 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-operator-scripts\") pod \"aodh-6c90-account-create-4nfs9\" (UID: \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\") " pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.645013 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c29l\" (UniqueName: \"kubernetes.io/projected/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-kube-api-access-9c29l\") pod \"aodh-6c90-account-create-4nfs9\" (UID: \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\") " pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.646388 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-operator-scripts\") pod \"aodh-6c90-account-create-4nfs9\" (UID: \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\") " pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.671231 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c29l\" (UniqueName: \"kubernetes.io/projected/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-kube-api-access-9c29l\") pod \"aodh-6c90-account-create-4nfs9\" (UID: \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\") " pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.729125 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:53 crc kubenswrapper[4904]: I1121 13:57:53.767720 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.379596 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-mx7lv"] Nov 21 13:57:54 crc kubenswrapper[4904]: W1121 13:57:54.384539 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebe1209b_266b_4875_9356_4592af75e127.slice/crio-f230bbdd5768bceb6f5587823b78e6cdb4aca4bbf0c604622f0fd3d1c1d5b9aa WatchSource:0}: Error finding container f230bbdd5768bceb6f5587823b78e6cdb4aca4bbf0c604622f0fd3d1c1d5b9aa: Status 404 returned error can't find the container with id f230bbdd5768bceb6f5587823b78e6cdb4aca4bbf0c604622f0fd3d1c1d5b9aa Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.416887 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-6c90-account-create-4nfs9"] Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.529011 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e" path="/var/lib/kubelet/pods/ba8689b8-7ff4-47bd-ac81-8b8228a9fd0e/volumes" Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.551093 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbabfd9a-3db8-4b71-886c-1986df601c51","Type":"ContainerStarted","Data":"a2e48f93945f61fd551225e147cb46a061b81a0171db69de96c4d623bbab3e80"} Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.551249 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.551279 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbabfd9a-3db8-4b71-886c-1986df601c51","Type":"ContainerStarted","Data":"fdd1c98232d0291781b173d0e33eccfccb71b170663b376c6485f1121303bf49"} Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.553814 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mx7lv" event={"ID":"ebe1209b-266b-4875-9356-4592af75e127","Type":"ContainerStarted","Data":"f230bbdd5768bceb6f5587823b78e6cdb4aca4bbf0c604622f0fd3d1c1d5b9aa"} Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.555322 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6c90-account-create-4nfs9" event={"ID":"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368","Type":"ContainerStarted","Data":"c5b677ca8b3cec29599f8cf5d27242f2884c0dc7d3e5fda79efb884133bcbd08"} Nov 21 13:57:54 crc kubenswrapper[4904]: I1121 13:57:54.581648 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.5815033 podStartE2EDuration="2.5815033s" podCreationTimestamp="2025-11-21 13:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:57:54.571974378 +0000 UTC m=+1548.693506930" watchObservedRunningTime="2025-11-21 13:57:54.5815033 +0000 UTC m=+1548.703035862" Nov 21 13:57:55 crc kubenswrapper[4904]: I1121 13:57:55.569071 4904 generic.go:334] "Generic (PLEG): container finished" podID="ebe1209b-266b-4875-9356-4592af75e127" containerID="e0df5bbca88a8967f9ccb8cc22faf16dab30cdc15c0941c9d0f7584d52e09762" exitCode=0 Nov 21 13:57:55 crc kubenswrapper[4904]: I1121 13:57:55.569164 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-db-create-mx7lv" event={"ID":"ebe1209b-266b-4875-9356-4592af75e127","Type":"ContainerDied","Data":"e0df5bbca88a8967f9ccb8cc22faf16dab30cdc15c0941c9d0f7584d52e09762"} Nov 21 13:57:55 crc kubenswrapper[4904]: I1121 13:57:55.571272 4904 generic.go:334] "Generic (PLEG): container finished" podID="bed2edaa-f8f5-47fe-b8c5-94c81c3b6368" containerID="256819aac8d815483e2d74177d76961c3c1dcd047f989b09384a5a7152a7033e" exitCode=0 Nov 21 13:57:55 crc kubenswrapper[4904]: I1121 13:57:55.571348 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6c90-account-create-4nfs9" event={"ID":"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368","Type":"ContainerDied","Data":"256819aac8d815483e2d74177d76961c3c1dcd047f989b09384a5a7152a7033e"} Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.062286 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.149739 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c29l\" (UniqueName: \"kubernetes.io/projected/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-kube-api-access-9c29l\") pod \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\" (UID: \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.149917 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-operator-scripts\") pod \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\" (UID: \"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.151108 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bed2edaa-f8f5-47fe-b8c5-94c81c3b6368" (UID: "bed2edaa-f8f5-47fe-b8c5-94c81c3b6368"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.158612 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-kube-api-access-9c29l" (OuterVolumeSpecName: "kube-api-access-9c29l") pod "bed2edaa-f8f5-47fe-b8c5-94c81c3b6368" (UID: "bed2edaa-f8f5-47fe-b8c5-94c81c3b6368"). InnerVolumeSpecName "kube-api-access-9c29l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.209447 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.222023 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.254224 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c29l\" (UniqueName: \"kubernetes.io/projected/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-kube-api-access-9c29l\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.254287 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.355960 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-run-httpd\") pod \"0641bd6c-e77d-4191-b380-7d389b00cd33\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.356119 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ktnf\" (UniqueName: \"kubernetes.io/projected/ebe1209b-266b-4875-9356-4592af75e127-kube-api-access-8ktnf\") pod \"ebe1209b-266b-4875-9356-4592af75e127\" (UID: \"ebe1209b-266b-4875-9356-4592af75e127\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.356198 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-log-httpd\") pod \"0641bd6c-e77d-4191-b380-7d389b00cd33\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.356256 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-combined-ca-bundle\") pod \"0641bd6c-e77d-4191-b380-7d389b00cd33\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.356293 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-config-data\") pod \"0641bd6c-e77d-4191-b380-7d389b00cd33\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.356531 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebe1209b-266b-4875-9356-4592af75e127-operator-scripts\") pod \"ebe1209b-266b-4875-9356-4592af75e127\" (UID: \"ebe1209b-266b-4875-9356-4592af75e127\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.356560 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7bwh\" (UniqueName: \"kubernetes.io/projected/0641bd6c-e77d-4191-b380-7d389b00cd33-kube-api-access-v7bwh\") pod \"0641bd6c-e77d-4191-b380-7d389b00cd33\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.356615 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-scripts\") pod \"0641bd6c-e77d-4191-b380-7d389b00cd33\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.356845 4904 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-sg-core-conf-yaml\") pod \"0641bd6c-e77d-4191-b380-7d389b00cd33\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.357176 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebe1209b-266b-4875-9356-4592af75e127-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebe1209b-266b-4875-9356-4592af75e127" (UID: "ebe1209b-266b-4875-9356-4592af75e127"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.357444 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0641bd6c-e77d-4191-b380-7d389b00cd33" (UID: "0641bd6c-e77d-4191-b380-7d389b00cd33"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.357622 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0641bd6c-e77d-4191-b380-7d389b00cd33" (UID: "0641bd6c-e77d-4191-b380-7d389b00cd33"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.357890 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebe1209b-266b-4875-9356-4592af75e127-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.357921 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.361323 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe1209b-266b-4875-9356-4592af75e127-kube-api-access-8ktnf" (OuterVolumeSpecName: "kube-api-access-8ktnf") pod "ebe1209b-266b-4875-9356-4592af75e127" (UID: "ebe1209b-266b-4875-9356-4592af75e127"). InnerVolumeSpecName "kube-api-access-8ktnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.363132 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0641bd6c-e77d-4191-b380-7d389b00cd33-kube-api-access-v7bwh" (OuterVolumeSpecName: "kube-api-access-v7bwh") pod "0641bd6c-e77d-4191-b380-7d389b00cd33" (UID: "0641bd6c-e77d-4191-b380-7d389b00cd33"). InnerVolumeSpecName "kube-api-access-v7bwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.364851 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-scripts" (OuterVolumeSpecName: "scripts") pod "0641bd6c-e77d-4191-b380-7d389b00cd33" (UID: "0641bd6c-e77d-4191-b380-7d389b00cd33"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.405321 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0641bd6c-e77d-4191-b380-7d389b00cd33" (UID: "0641bd6c-e77d-4191-b380-7d389b00cd33"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.458608 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0641bd6c-e77d-4191-b380-7d389b00cd33" (UID: "0641bd6c-e77d-4191-b380-7d389b00cd33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.459414 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-combined-ca-bundle\") pod \"0641bd6c-e77d-4191-b380-7d389b00cd33\" (UID: \"0641bd6c-e77d-4191-b380-7d389b00cd33\") " Nov 21 13:57:57 crc kubenswrapper[4904]: W1121 13:57:57.459606 4904 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/0641bd6c-e77d-4191-b380-7d389b00cd33/volumes/kubernetes.io~secret/combined-ca-bundle Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.459857 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0641bd6c-e77d-4191-b380-7d389b00cd33" (UID: "0641bd6c-e77d-4191-b380-7d389b00cd33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.460314 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.460330 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.460344 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ktnf\" (UniqueName: \"kubernetes.io/projected/ebe1209b-266b-4875-9356-4592af75e127-kube-api-access-8ktnf\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.460355 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0641bd6c-e77d-4191-b380-7d389b00cd33-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.460364 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.460372 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7bwh\" (UniqueName: \"kubernetes.io/projected/0641bd6c-e77d-4191-b380-7d389b00cd33-kube-api-access-v7bwh\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.499778 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-config-data" (OuterVolumeSpecName: "config-data") pod "0641bd6c-e77d-4191-b380-7d389b00cd33" (UID: "0641bd6c-e77d-4191-b380-7d389b00cd33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.513281 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.514093 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.562712 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0641bd6c-e77d-4191-b380-7d389b00cd33-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.598207 4904 generic.go:334] "Generic (PLEG): container finished" podID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerID="b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9" exitCode=0 Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.598341 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerDied","Data":"b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9"} Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.598389 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0641bd6c-e77d-4191-b380-7d389b00cd33","Type":"ContainerDied","Data":"03b8289ca64a5a080b2cf130635b2eee3b8f95c3b3f4710b138f75e0c390d8e4"} Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.598402 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.598413 4904 scope.go:117] "RemoveContainer" containerID="fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.605404 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-mx7lv" event={"ID":"ebe1209b-266b-4875-9356-4592af75e127","Type":"ContainerDied","Data":"f230bbdd5768bceb6f5587823b78e6cdb4aca4bbf0c604622f0fd3d1c1d5b9aa"} Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.605464 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f230bbdd5768bceb6f5587823b78e6cdb4aca4bbf0c604622f0fd3d1c1d5b9aa" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.605560 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-mx7lv" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.609481 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-6c90-account-create-4nfs9" event={"ID":"bed2edaa-f8f5-47fe-b8c5-94c81c3b6368","Type":"ContainerDied","Data":"c5b677ca8b3cec29599f8cf5d27242f2884c0dc7d3e5fda79efb884133bcbd08"} Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.609521 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b677ca8b3cec29599f8cf5d27242f2884c0dc7d3e5fda79efb884133bcbd08" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.609588 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-6c90-account-create-4nfs9" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.660121 4904 scope.go:117] "RemoveContainer" containerID="336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.689003 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.715379 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.748123 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.748731 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed2edaa-f8f5-47fe-b8c5-94c81c3b6368" containerName="mariadb-account-create" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.748751 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed2edaa-f8f5-47fe-b8c5-94c81c3b6368" containerName="mariadb-account-create" Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.748788 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="ceilometer-central-agent" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.748796 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="ceilometer-central-agent" Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.748823 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebe1209b-266b-4875-9356-4592af75e127" containerName="mariadb-database-create" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.748831 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebe1209b-266b-4875-9356-4592af75e127" containerName="mariadb-database-create" Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.748848 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="ceilometer-notification-agent" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.748855 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="ceilometer-notification-agent" Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.748868 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="proxy-httpd" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.748874 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="proxy-httpd" Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.748898 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="sg-core" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.748906 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="sg-core" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.749157 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebe1209b-266b-4875-9356-4592af75e127" containerName="mariadb-database-create" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.749171 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="proxy-httpd" Nov 21 13:57:57 
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.749188 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="sg-core"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.749212 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="ceilometer-notification-agent"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.749226 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" containerName="ceilometer-central-agent"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.753916 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed2edaa-f8f5-47fe-b8c5-94c81c3b6368" containerName="mariadb-account-create"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.752882 4904 scope.go:117] "RemoveContainer" containerID="627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.775447 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.775642 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.778616 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.778989 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.799124 4904 scope.go:117] "RemoveContainer" containerID="b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.824039 4904 scope.go:117] "RemoveContainer" containerID="fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7"
Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.825617 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7\": container with ID starting with fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7 not found: ID does not exist" containerID="fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.825713 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7"} err="failed to get container status \"fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7\": rpc error: code = NotFound desc = could not find container \"fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7\": container with ID starting with fc2429135a3688487ba2495e9f8ebc28354b9ab98e170db7fbf63394a09702f7 not found: ID does not exist"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.825779 4904 scope.go:117] "RemoveContainer" containerID="336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5"
Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.826178 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5\": container with ID starting with 336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5 not found: ID does not exist" containerID="336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.826224 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5"} err="failed to get container status \"336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5\": rpc error: code = NotFound desc = could not find container \"336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5\": container with ID starting with 336c9ea0ad9b4066f3432a1ce59572360def7cc5a5d21cc57743ed2ee24754a5 not found: ID does not exist"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.826253 4904 scope.go:117] "RemoveContainer" containerID="627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30"
Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.826558 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30\": container with ID starting with 627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30 not found: ID does not exist" containerID="627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.826621 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30"} err="failed to get container status \"627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30\": rpc error: code = NotFound desc = could not find container \"627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30\": container with ID starting with 627a4b36fbea5928aec36d446836626aededd49e37648d16f701b0b8fd23cc30 not found: ID does not exist"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.826644 4904 scope.go:117] "RemoveContainer" containerID="b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9"
Nov 21 13:57:57 crc kubenswrapper[4904]: E1121 13:57:57.826931 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9\": container with ID starting with b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9 not found: ID does not exist" containerID="b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9"
Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.826961 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9"} err="failed to get container status \"b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9\": rpc error: code = NotFound desc = could not find container \"b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9\": container with ID starting with b66a213fdc295187a2f9bd06c3b820d77eafb71be16e655696f6d496404858e9 not found: ID does not exist"
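The NotFound errors above are a benign race, not a failure: the containers were already removed by an earlier pass, so the follow-up status lookup fails and the deletor logs the error at info level and moves on. A minimal sketch of that idempotent-delete pattern (the runtime interface and exception here are stand-ins, not the real CRI gRPC API):

    # Sketch: deleting a container that is already gone is treated as success,
    # much as the "DeleteContainer returned error ... NotFound" lines above are
    # logged and then ignored.
    class NotFoundError(Exception):
        pass

    def remove_container(runtime, container_id: str) -> None:
        try:
            runtime.container_status(container_id)  # mirrors "ContainerStatus from runtime service"
        except NotFoundError:
            print(f"container {container_id[:13]}... not found; treating as already deleted")
            return
        runtime.remove(container_id)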
\"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.889229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.889379 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-config-data\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.889548 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-run-httpd\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.889741 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-log-httpd\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.889934 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkdkk\" (UniqueName: \"kubernetes.io/projected/e4439528-3e29-4bf5-bfa6-9305966ccd92-kube-api-access-tkdkk\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.889984 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-scripts\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.992862 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-run-httpd\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.993001 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-log-httpd\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.993107 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkdkk\" (UniqueName: \"kubernetes.io/projected/e4439528-3e29-4bf5-bfa6-9305966ccd92-kube-api-access-tkdkk\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.993158 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-scripts\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.993250 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.993305 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.993341 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-config-data\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.993544 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-run-httpd\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.993819 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-log-httpd\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.999056 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-scripts\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.999120 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:57 crc kubenswrapper[4904]: I1121 13:57:57.999770 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-config-data\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.009117 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.026424 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkdkk\" (UniqueName: 
\"kubernetes.io/projected/e4439528-3e29-4bf5-bfa6-9305966ccd92-kube-api-access-tkdkk\") pod \"ceilometer-0\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " pod="openstack/ceilometer-0" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.102839 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.531033 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0641bd6c-e77d-4191-b380-7d389b00cd33" path="/var/lib/kubelet/pods/0641bd6c-e77d-4191-b380-7d389b00cd33/volumes" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.671737 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.716621 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-ff79s"] Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.718417 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.720521 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.720919 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.722226 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.722259 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lqfk6" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.739102 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-ff79s"] Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.814696 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2n4c\" (UniqueName: \"kubernetes.io/projected/54872f28-88cc-4a28-936c-7271bc0ad2b1-kube-api-access-c2n4c\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.814866 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-config-data\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.814931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-scripts\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.815060 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-combined-ca-bundle\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.917879 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-scripts\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.918032 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-combined-ca-bundle\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.918157 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2n4c\" (UniqueName: \"kubernetes.io/projected/54872f28-88cc-4a28-936c-7271bc0ad2b1-kube-api-access-c2n4c\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.918271 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-config-data\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.930858 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-combined-ca-bundle\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.931927 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-config-data\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.942265 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-scripts\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:58 crc kubenswrapper[4904]: I1121 13:57:58.962144 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2n4c\" (UniqueName: \"kubernetes.io/projected/54872f28-88cc-4a28-936c-7271bc0ad2b1-kube-api-access-c2n4c\") pod \"aodh-db-sync-ff79s\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " pod="openstack/aodh-db-sync-ff79s" Nov 21 13:57:59 crc kubenswrapper[4904]: I1121 13:57:59.076455 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 13:57:59 crc kubenswrapper[4904]: I1121 13:57:59.076455 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-ff79s"
Nov 21 13:57:59 crc kubenswrapper[4904]: I1121 13:57:59.618569 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-ff79s"]
Nov 21 13:57:59 crc kubenswrapper[4904]: I1121 13:57:59.636493 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerStarted","Data":"a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3"}
Nov 21 13:57:59 crc kubenswrapper[4904]: I1121 13:57:59.636554 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerStarted","Data":"7e186acb86ca1dfeb7ac7e6c56fb08210210286ac43f0c6ab40b287a2196279f"}
Nov 21 13:57:59 crc kubenswrapper[4904]: I1121 13:57:59.640741 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ff79s" event={"ID":"54872f28-88cc-4a28-936c-7271bc0ad2b1","Type":"ContainerStarted","Data":"7105d27f02e7ab0ff909f9822ec3c345561d6f6b449c8c640b572af4eb67b50b"}
Nov 21 13:58:00 crc kubenswrapper[4904]: I1121 13:58:00.667200 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerStarted","Data":"306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18"}
Nov 21 13:58:01 crc kubenswrapper[4904]: I1121 13:58:01.710838 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerStarted","Data":"3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c"}
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.007574 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.598593 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qqn46"]
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.600889 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.604133 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.604424 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.610742 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qqn46"]
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.769432 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-scripts\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.769558 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.769588 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-config-data\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.769615 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwjkw\" (UniqueName: \"kubernetes.io/projected/648982b6-6d9e-4aa1-9ec1-6191871129bc-kube-api-access-mwjkw\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.812548 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.814267 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.820531 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.857458 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.873473 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-scripts\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.873597 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.873619 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-config-data\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.873642 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwjkw\" (UniqueName: \"kubernetes.io/projected/648982b6-6d9e-4aa1-9ec1-6191871129bc-kube-api-access-mwjkw\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.889781 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-config-data\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.893469 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.898793 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-scripts\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.909548 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwjkw\" (UniqueName: \"kubernetes.io/projected/648982b6-6d9e-4aa1-9ec1-6191871129bc-kube-api-access-mwjkw\") pod \"nova-cell0-cell-mapping-qqn46\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.932218 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qqn46"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.982123 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47r54\" (UniqueName: \"kubernetes.io/projected/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-kube-api-access-47r54\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.982535 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-config-data\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.982597 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0"
Nov 21 13:58:03 crc kubenswrapper[4904]: I1121 13:58:03.996903 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:03.999114 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.007908 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.050755 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.085205 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scm8\" (UniqueName: \"kubernetes.io/projected/6a18bd76-074c-4265-afc7-24155421e2ff-kube-api-access-7scm8\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0"
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.085258 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-config-data\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0"
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.085303 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-config-data\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0"
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.085321 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47r54\" (UniqueName: \"kubernetes.io/projected/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-kube-api-access-47r54\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0"
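The "SyncLoop (PLEG): event for pod" lines in this stretch come from the pod lifecycle event generator, which periodically relists runtime containers and diffs consecutive snapshots into ContainerStarted/ContainerDied events. A minimal sketch of that diffing idea (the snapshot structure is assumed for illustration, not kubelet's actual code):

    def pleg_events(old: dict[str, str], new: dict[str, str]) -> list[tuple[str, str]]:
        """Compare two {container_id: state} snapshots, emitting PLEG-style events."""
        events = []
        for cid, state in new.items():
            if state == "running" and old.get(cid) != "running":
                events.append(("ContainerStarted", cid))
        for cid, state in old.items():
            if state == "running" and new.get(cid, "exited") != "running":
                events.append(("ContainerDied", cid))
        return events

    old = {"b66a213fdc29": "running"}
    new = {"b66a213fdc29": "exited", "a7e08a216e74": "running"}
    print(pleg_events(old, new))  # one ContainerStarted, one ContainerDied

The per-pod events then feed the sync loop, which is why a ContainerDied is immediately followed above by "RemoveContainer" and sandbox re-creation decisions.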
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.085387 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.085458 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a18bd76-074c-4265-afc7-24155421e2ff-logs\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.103811 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.121996 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-config-data\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.126538 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47r54\" (UniqueName: \"kubernetes.io/projected/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-kube-api-access-47r54\") pod \"nova-scheduler-0\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.144867 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.148625 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.152609 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.167420 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.179161 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.187129 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-config-data\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.187209 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.187334 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a18bd76-074c-4265-afc7-24155421e2ff-logs\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.187395 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7scm8\" (UniqueName: \"kubernetes.io/projected/6a18bd76-074c-4265-afc7-24155421e2ff-kube-api-access-7scm8\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.192814 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a18bd76-074c-4265-afc7-24155421e2ff-logs\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.208613 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-config-data\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.211229 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.236379 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7scm8\" (UniqueName: \"kubernetes.io/projected/6a18bd76-074c-4265-afc7-24155421e2ff-kube-api-access-7scm8\") pod \"nova-metadata-0\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.289559 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-config-data\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " 
pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.289649 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6d5d\" (UniqueName: \"kubernetes.io/projected/c823a88e-4fa1-451f-88f5-cd1397a4ae34-kube-api-access-x6d5d\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.289697 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.289737 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c823a88e-4fa1-451f-88f5-cd1397a4ae34-logs\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.351446 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8rrf"] Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.356244 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.365502 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.386356 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.387969 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.396008 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.402810 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6d5d\" (UniqueName: \"kubernetes.io/projected/c823a88e-4fa1-451f-88f5-cd1397a4ae34-kube-api-access-x6d5d\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.402986 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.403122 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c823a88e-4fa1-451f-88f5-cd1397a4ae34-logs\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.403490 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-config-data\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.404259 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c823a88e-4fa1-451f-88f5-cd1397a4ae34-logs\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.416008 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.418582 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-config-data\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.422496 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8rrf"] Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.424452 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6d5d\" (UniqueName: \"kubernetes.io/projected/c823a88e-4fa1-451f-88f5-cd1397a4ae34-kube-api-access-x6d5d\") pod \"nova-api-0\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.436171 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.506053 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.506155 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.506586 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-config\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.506797 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.506876 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdrs9\" (UniqueName: \"kubernetes.io/projected/04b80661-562e-4a69-ba70-15c22a4d6ece-kube-api-access-xdrs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.507205 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.507258 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmchw\" (UniqueName: \"kubernetes.io/projected/b6b4562e-b84b-4bff-9680-51341eb3864c-kube-api-access-pmchw\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.507366 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.507429 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609466 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609541 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmchw\" (UniqueName: \"kubernetes.io/projected/b6b4562e-b84b-4bff-9680-51341eb3864c-kube-api-access-pmchw\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609593 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609622 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609652 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609698 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609785 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-config\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609840 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.609873 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdrs9\" (UniqueName: \"kubernetes.io/projected/04b80661-562e-4a69-ba70-15c22a4d6ece-kube-api-access-xdrs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.610526 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.611316 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.611526 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-config\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.612154 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.612742 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.622445 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.634413 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.635352 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdrs9\" (UniqueName: \"kubernetes.io/projected/04b80661-562e-4a69-ba70-15c22a4d6ece-kube-api-access-xdrs9\") pod \"nova-cell1-novncproxy-0\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.665591 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmchw\" (UniqueName: \"kubernetes.io/projected/b6b4562e-b84b-4bff-9680-51341eb3864c-kube-api-access-pmchw\") pod \"dnsmasq-dns-9b86998b5-c8rrf\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.683306 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.686377 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.686377 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf"
Nov 21 13:58:04 crc kubenswrapper[4904]: I1121 13:58:04.720546 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.433495 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smj2v"]
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.437669 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.440775 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.441258 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.446736 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smj2v"]
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.538644 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.538996 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-scripts\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.539094 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvv5\" (UniqueName: \"kubernetes.io/projected/f179df31-95c5-4ae1-8a64-60caefab9aea-kube-api-access-cnvv5\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.539197 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-config-data\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.642152 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-scripts\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.642310 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvv5\" (UniqueName: \"kubernetes.io/projected/f179df31-95c5-4ae1-8a64-60caefab9aea-kube-api-access-cnvv5\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.642436 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-config-data\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.642536 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.652315 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.656319 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-scripts\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.656890 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-config-data\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.662750 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvv5\" (UniqueName: \"kubernetes.io/projected/f179df31-95c5-4ae1-8a64-60caefab9aea-kube-api-access-cnvv5\") pod \"nova-cell1-conductor-db-sync-smj2v\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:05 crc kubenswrapper[4904]: I1121 13:58:05.772560 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-smj2v"
Nov 21 13:58:08 crc kubenswrapper[4904]: I1121 13:58:08.491463 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 21 13:58:08 crc kubenswrapper[4904]: I1121 13:58:08.502346 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 13:58:08 crc kubenswrapper[4904]: I1121 13:58:08.910367 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 21 13:58:09 crc kubenswrapper[4904]: I1121 13:58:09.823730 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 21 13:58:09 crc kubenswrapper[4904]: I1121 13:58:09.903105 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smj2v"]
Nov 21 13:58:09 crc kubenswrapper[4904]: I1121 13:58:09.914807 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8rrf"]
Nov 21 13:58:09 crc kubenswrapper[4904]: I1121 13:58:09.967419 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerStarted","Data":"534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507"}
Nov 21 13:58:09 crc kubenswrapper[4904]: I1121 13:58:09.967759 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 21 13:58:09 crc kubenswrapper[4904]: I1121 13:58:09.990955 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ff79s" event={"ID":"54872f28-88cc-4a28-936c-7271bc0ad2b1","Type":"ContainerStarted","Data":"73aa406094e52a1b1b90d94bb93a9751331f3cd6418c35513ed2b57d1fd38f05"}
Nov 21 13:58:09 crc kubenswrapper[4904]: I1121 13:58:09.995798 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a18bd76-074c-4265-afc7-24155421e2ff","Type":"ContainerStarted","Data":"d54a2d02c4d67ab32e049e3e342ec0cba38874977f1101a6d8a68380eecc7b93"}
Nov 21 13:58:10 crc kubenswrapper[4904]: I1121 13:58:10.013989 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qqn46"]
Nov 21 13:58:10 crc kubenswrapper[4904]: I1121 13:58:10.044011 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.707286457 podStartE2EDuration="13.043984996s" podCreationTimestamp="2025-11-21 13:57:57 +0000 UTC" firstStartedPulling="2025-11-21 13:57:58.687117713 +0000 UTC m=+1552.808650275" lastFinishedPulling="2025-11-21 13:58:09.023816272 +0000 UTC m=+1563.145348814" observedRunningTime="2025-11-21 13:58:09.999345789 +0000 UTC m=+1564.120878351" watchObservedRunningTime="2025-11-21 13:58:10.043984996 +0000 UTC m=+1564.165517548"
Nov 21 13:58:10 crc kubenswrapper[4904]: I1121 13:58:10.077119 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-ff79s" podStartSLOduration=2.797878036 podStartE2EDuration="12.077087852s" podCreationTimestamp="2025-11-21 13:57:58 +0000 UTC" firstStartedPulling="2025-11-21 13:57:59.625113585 +0000 UTC m=+1553.746646137" lastFinishedPulling="2025-11-21 13:58:08.904323401 +0000 UTC m=+1563.025855953" observedRunningTime="2025-11-21 13:58:10.020087854 +0000 UTC m=+1564.141620406" watchObservedRunningTime="2025-11-21 13:58:10.077087852 +0000 UTC m=+1564.198620404"
Nov 21 13:58:10 crc kubenswrapper[4904]: I1121 13:58:10.424905 4904
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:10 crc kubenswrapper[4904]: I1121 13:58:10.434644 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 13:58:10 crc kubenswrapper[4904]: I1121 13:58:10.444639 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:58:10 crc kubenswrapper[4904]: W1121 13:58:10.460577 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04b80661_562e_4a69_ba70_15c22a4d6ece.slice/crio-b452d54fdc2d10553cb50d8e902ebe976cff5025e0fc3d30cb408dcf10b0a870 WatchSource:0}: Error finding container b452d54fdc2d10553cb50d8e902ebe976cff5025e0fc3d30cb408dcf10b0a870: Status 404 returned error can't find the container with id b452d54fdc2d10553cb50d8e902ebe976cff5025e0fc3d30cb408dcf10b0a870 Nov 21 13:58:10 crc kubenswrapper[4904]: W1121 13:58:10.467989 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03692bda_5fd5_425f_bce4_c39fd4ee5a1b.slice/crio-e3fc39047349a2491d8ab5a357c0ce3c3b2bd5494c8307f9c5faa6dee9692228 WatchSource:0}: Error finding container e3fc39047349a2491d8ab5a357c0ce3c3b2bd5494c8307f9c5faa6dee9692228: Status 404 returned error can't find the container with id e3fc39047349a2491d8ab5a357c0ce3c3b2bd5494c8307f9c5faa6dee9692228 Nov 21 13:58:10 crc kubenswrapper[4904]: W1121 13:58:10.471553 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc823a88e_4fa1_451f_88f5_cd1397a4ae34.slice/crio-e0c692865e60454d4ace7e106b8e14a58dc7c6e28cd0407da2d16bddc4af2c20 WatchSource:0}: Error finding container e0c692865e60454d4ace7e106b8e14a58dc7c6e28cd0407da2d16bddc4af2c20: Status 404 returned error can't find the container with id e0c692865e60454d4ace7e106b8e14a58dc7c6e28cd0407da2d16bddc4af2c20 Nov 21 13:58:10 crc kubenswrapper[4904]: I1121 13:58:10.522932 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:58:10 crc kubenswrapper[4904]: E1121 13:58:10.523242 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.010597 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-smj2v" event={"ID":"f179df31-95c5-4ae1-8a64-60caefab9aea","Type":"ContainerStarted","Data":"bdac76f4d14d940974600f0038cd1de25e0dbd3c43b508a14b768c268586f2c6"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.010686 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-smj2v" event={"ID":"f179df31-95c5-4ae1-8a64-60caefab9aea","Type":"ContainerStarted","Data":"78ea6a297b483dc9230f5d1d2b0b85895b15c306e3d912e2879dbe52fca48fee"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.012986 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qqn46" 
event={"ID":"648982b6-6d9e-4aa1-9ec1-6191871129bc","Type":"ContainerStarted","Data":"a2a53d8e3514da576bfa67410dc7dfed5e47f8fff861832ce4fa0b89d226cf9a"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.013072 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qqn46" event={"ID":"648982b6-6d9e-4aa1-9ec1-6191871129bc","Type":"ContainerStarted","Data":"52bda9dd231ad59a755c98f58f70ccf7ad567fb421d87fca06d565bd4b12c625"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.015832 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c823a88e-4fa1-451f-88f5-cd1397a4ae34","Type":"ContainerStarted","Data":"e0c692865e60454d4ace7e106b8e14a58dc7c6e28cd0407da2d16bddc4af2c20"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.016774 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04b80661-562e-4a69-ba70-15c22a4d6ece","Type":"ContainerStarted","Data":"b452d54fdc2d10553cb50d8e902ebe976cff5025e0fc3d30cb408dcf10b0a870"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.017868 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"03692bda-5fd5-425f-bce4-c39fd4ee5a1b","Type":"ContainerStarted","Data":"e3fc39047349a2491d8ab5a357c0ce3c3b2bd5494c8307f9c5faa6dee9692228"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.035796 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-smj2v" podStartSLOduration=6.035767878 podStartE2EDuration="6.035767878s" podCreationTimestamp="2025-11-21 13:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:11.030713475 +0000 UTC m=+1565.152246067" watchObservedRunningTime="2025-11-21 13:58:11.035767878 +0000 UTC m=+1565.157300430" Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.037769 4904 generic.go:334] "Generic (PLEG): container finished" podID="b6b4562e-b84b-4bff-9680-51341eb3864c" containerID="9920714575c622c425b06946703efcd1674c685fbed6ae64cc874a21d48ae469" exitCode=0 Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.037928 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" event={"ID":"b6b4562e-b84b-4bff-9680-51341eb3864c","Type":"ContainerDied","Data":"9920714575c622c425b06946703efcd1674c685fbed6ae64cc874a21d48ae469"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.038072 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" event={"ID":"b6b4562e-b84b-4bff-9680-51341eb3864c","Type":"ContainerStarted","Data":"d111cc8a28852774899457d3e40fdb3814be479da0a61fc9372567a17b2deecd"} Nov 21 13:58:11 crc kubenswrapper[4904]: I1121 13:58:11.057939 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qqn46" podStartSLOduration=8.057915578 podStartE2EDuration="8.057915578s" podCreationTimestamp="2025-11-21 13:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:11.053192282 +0000 UTC m=+1565.174724834" watchObservedRunningTime="2025-11-21 13:58:11.057915578 +0000 UTC m=+1565.179448140" Nov 21 13:58:12 crc kubenswrapper[4904]: I1121 13:58:12.058782 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" event={"ID":"b6b4562e-b84b-4bff-9680-51341eb3864c","Type":"ContainerStarted","Data":"ced64c0214f8a24937656987fb3ae64059618624c3497fd118a15dfe39249512"} Nov 21 13:58:12 crc kubenswrapper[4904]: I1121 13:58:12.059280 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:12 crc kubenswrapper[4904]: I1121 13:58:12.093519 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" podStartSLOduration=8.093496497 podStartE2EDuration="8.093496497s" podCreationTimestamp="2025-11-21 13:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:12.080293596 +0000 UTC m=+1566.201826158" watchObservedRunningTime="2025-11-21 13:58:12.093496497 +0000 UTC m=+1566.215029049" Nov 21 13:58:14 crc kubenswrapper[4904]: I1121 13:58:14.085928 4904 generic.go:334] "Generic (PLEG): container finished" podID="54872f28-88cc-4a28-936c-7271bc0ad2b1" containerID="73aa406094e52a1b1b90d94bb93a9751331f3cd6418c35513ed2b57d1fd38f05" exitCode=0 Nov 21 13:58:14 crc kubenswrapper[4904]: I1121 13:58:14.086467 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ff79s" event={"ID":"54872f28-88cc-4a28-936c-7271bc0ad2b1","Type":"ContainerDied","Data":"73aa406094e52a1b1b90d94bb93a9751331f3cd6418c35513ed2b57d1fd38f05"} Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.109675 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a18bd76-074c-4265-afc7-24155421e2ff","Type":"ContainerStarted","Data":"381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2"} Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.110201 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a18bd76-074c-4265-afc7-24155421e2ff","Type":"ContainerStarted","Data":"ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8"} Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.110384 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" containerName="nova-metadata-log" containerID="cri-o://ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8" gracePeriod=30 Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.111185 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" containerName="nova-metadata-metadata" containerID="cri-o://381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2" gracePeriod=30 Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.121911 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c823a88e-4fa1-451f-88f5-cd1397a4ae34","Type":"ContainerStarted","Data":"59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39"} Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.121976 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c823a88e-4fa1-451f-88f5-cd1397a4ae34","Type":"ContainerStarted","Data":"d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e"} Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.126045 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04b80661-562e-4a69-ba70-15c22a4d6ece","Type":"ContainerStarted","Data":"d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc"} Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.126256 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="04b80661-562e-4a69-ba70-15c22a4d6ece" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc" gracePeriod=30 Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.129776 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"03692bda-5fd5-425f-bce4-c39fd4ee5a1b","Type":"ContainerStarted","Data":"254f6f1a72df6bb428f10349938769100b4faad7680a7d9f16c2e2134a113881"} Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.158251 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=7.986867617 podStartE2EDuration="12.158220902s" podCreationTimestamp="2025-11-21 13:58:03 +0000 UTC" firstStartedPulling="2025-11-21 13:58:09.884422539 +0000 UTC m=+1564.005955091" lastFinishedPulling="2025-11-21 13:58:14.055775824 +0000 UTC m=+1568.177308376" observedRunningTime="2025-11-21 13:58:15.135353365 +0000 UTC m=+1569.256885917" watchObservedRunningTime="2025-11-21 13:58:15.158220902 +0000 UTC m=+1569.279753464" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.180618 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=7.580689284 podStartE2EDuration="11.180592037s" podCreationTimestamp="2025-11-21 13:58:04 +0000 UTC" firstStartedPulling="2025-11-21 13:58:10.465584887 +0000 UTC m=+1564.587117439" lastFinishedPulling="2025-11-21 13:58:14.06548764 +0000 UTC m=+1568.187020192" observedRunningTime="2025-11-21 13:58:15.162053595 +0000 UTC m=+1569.283586167" watchObservedRunningTime="2025-11-21 13:58:15.180592037 +0000 UTC m=+1569.302124599" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.205490 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=7.6279050139999995 podStartE2EDuration="11.205460153s" podCreationTimestamp="2025-11-21 13:58:04 +0000 UTC" firstStartedPulling="2025-11-21 13:58:10.479916956 +0000 UTC m=+1564.601449508" lastFinishedPulling="2025-11-21 13:58:14.057472095 +0000 UTC m=+1568.179004647" observedRunningTime="2025-11-21 13:58:15.190278253 +0000 UTC m=+1569.311810815" watchObservedRunningTime="2025-11-21 13:58:15.205460153 +0000 UTC m=+1569.326992715" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.222386 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=8.636431512 podStartE2EDuration="12.222355324s" podCreationTimestamp="2025-11-21 13:58:03 +0000 UTC" firstStartedPulling="2025-11-21 13:58:10.471694816 +0000 UTC m=+1564.593227358" lastFinishedPulling="2025-11-21 13:58:14.057618618 +0000 UTC m=+1568.179151170" observedRunningTime="2025-11-21 13:58:15.220118629 +0000 UTC m=+1569.341651181" watchObservedRunningTime="2025-11-21 13:58:15.222355324 +0000 UTC m=+1569.343887876" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.612082 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-ff79s" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.719429 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-scripts\") pod \"54872f28-88cc-4a28-936c-7271bc0ad2b1\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.719761 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2n4c\" (UniqueName: \"kubernetes.io/projected/54872f28-88cc-4a28-936c-7271bc0ad2b1-kube-api-access-c2n4c\") pod \"54872f28-88cc-4a28-936c-7271bc0ad2b1\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.720060 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-combined-ca-bundle\") pod \"54872f28-88cc-4a28-936c-7271bc0ad2b1\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.720907 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-config-data\") pod \"54872f28-88cc-4a28-936c-7271bc0ad2b1\" (UID: \"54872f28-88cc-4a28-936c-7271bc0ad2b1\") " Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.729006 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-scripts" (OuterVolumeSpecName: "scripts") pod "54872f28-88cc-4a28-936c-7271bc0ad2b1" (UID: "54872f28-88cc-4a28-936c-7271bc0ad2b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.732248 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54872f28-88cc-4a28-936c-7271bc0ad2b1-kube-api-access-c2n4c" (OuterVolumeSpecName: "kube-api-access-c2n4c") pod "54872f28-88cc-4a28-936c-7271bc0ad2b1" (UID: "54872f28-88cc-4a28-936c-7271bc0ad2b1"). InnerVolumeSpecName "kube-api-access-c2n4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.758370 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54872f28-88cc-4a28-936c-7271bc0ad2b1" (UID: "54872f28-88cc-4a28-936c-7271bc0ad2b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.763047 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-config-data" (OuterVolumeSpecName: "config-data") pod "54872f28-88cc-4a28-936c-7271bc0ad2b1" (UID: "54872f28-88cc-4a28-936c-7271bc0ad2b1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.823864 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.824230 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2n4c\" (UniqueName: \"kubernetes.io/projected/54872f28-88cc-4a28-936c-7271bc0ad2b1-kube-api-access-c2n4c\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.825106 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:15 crc kubenswrapper[4904]: I1121 13:58:15.825225 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54872f28-88cc-4a28-936c-7271bc0ad2b1-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:16 crc kubenswrapper[4904]: I1121 13:58:16.147352 4904 generic.go:334] "Generic (PLEG): container finished" podID="6a18bd76-074c-4265-afc7-24155421e2ff" containerID="ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8" exitCode=143 Nov 21 13:58:16 crc kubenswrapper[4904]: I1121 13:58:16.147452 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a18bd76-074c-4265-afc7-24155421e2ff","Type":"ContainerDied","Data":"ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8"} Nov 21 13:58:16 crc kubenswrapper[4904]: I1121 13:58:16.150243 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-ff79s" event={"ID":"54872f28-88cc-4a28-936c-7271bc0ad2b1","Type":"ContainerDied","Data":"7105d27f02e7ab0ff909f9822ec3c345561d6f6b449c8c640b572af4eb67b50b"} Nov 21 13:58:16 crc kubenswrapper[4904]: I1121 13:58:16.150283 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7105d27f02e7ab0ff909f9822ec3c345561d6f6b449c8c640b572af4eb67b50b" Nov 21 13:58:16 crc kubenswrapper[4904]: I1121 13:58:16.150410 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-ff79s" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.387077 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 13:58:18 crc kubenswrapper[4904]: E1121 13:58:18.388220 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54872f28-88cc-4a28-936c-7271bc0ad2b1" containerName="aodh-db-sync" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.388235 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="54872f28-88cc-4a28-936c-7271bc0ad2b1" containerName="aodh-db-sync" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.395336 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="54872f28-88cc-4a28-936c-7271bc0ad2b1" containerName="aodh-db-sync" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.397985 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.411602 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lqfk6" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.412432 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.412639 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.415206 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.591763 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.591831 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-scripts\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.591915 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmjrh\" (UniqueName: \"kubernetes.io/projected/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-kube-api-access-dmjrh\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.591964 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-config-data\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.693633 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.693737 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-scripts\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.693821 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmjrh\" (UniqueName: \"kubernetes.io/projected/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-kube-api-access-dmjrh\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.693860 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-config-data\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: 
I1121 13:58:18.702396 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-scripts\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.703268 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.723132 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-config-data\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.735500 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmjrh\" (UniqueName: \"kubernetes.io/projected/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-kube-api-access-dmjrh\") pod \"aodh-0\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") " pod="openstack/aodh-0" Nov 21 13:58:18 crc kubenswrapper[4904]: I1121 13:58:18.745239 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 13:58:19 crc kubenswrapper[4904]: I1121 13:58:19.148987 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 13:58:19 crc kubenswrapper[4904]: I1121 13:58:19.366746 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 13:58:19 crc kubenswrapper[4904]: I1121 13:58:19.366854 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 13:58:19 crc kubenswrapper[4904]: W1121 13:58:19.395928 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d2a8d2c_7abe_41bf_ab48_48bfd5a703da.slice/crio-94a528bac75a88f87e4c56ce0c443267a27090349d89f229b4b16b42c75e7c0f WatchSource:0}: Error finding container 94a528bac75a88f87e4c56ce0c443267a27090349d89f229b4b16b42c75e7c0f: Status 404 returned error can't find the container with id 94a528bac75a88f87e4c56ce0c443267a27090349d89f229b4b16b42c75e7c0f Nov 21 13:58:19 crc kubenswrapper[4904]: I1121 13:58:19.399195 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 13:58:19 crc kubenswrapper[4904]: I1121 13:58:19.688937 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:58:19 crc kubenswrapper[4904]: I1121 13:58:19.722357 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:19 crc kubenswrapper[4904]: I1121 13:58:19.775990 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5982r"] Nov 21 13:58:19 crc kubenswrapper[4904]: I1121 13:58:19.776286 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" podUID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" containerName="dnsmasq-dns" containerID="cri-o://dd0355b0668e129c4a64c1d46c141a5af576d573c6cc94249e8312bf78c77353" gracePeriod=10 Nov 21 13:58:20 crc kubenswrapper[4904]: 
I1121 13:58:20.212303 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerStarted","Data":"94a528bac75a88f87e4c56ce0c443267a27090349d89f229b4b16b42c75e7c0f"} Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.215312 4904 generic.go:334] "Generic (PLEG): container finished" podID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" containerID="dd0355b0668e129c4a64c1d46c141a5af576d573c6cc94249e8312bf78c77353" exitCode=0 Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.215355 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" event={"ID":"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9","Type":"ContainerDied","Data":"dd0355b0668e129c4a64c1d46c141a5af576d573c6cc94249e8312bf78c77353"} Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.522147 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.655748 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-nb\") pod \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.655964 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-svc\") pod \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.656059 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqwn7\" (UniqueName: \"kubernetes.io/projected/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-kube-api-access-tqwn7\") pod \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.656137 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-config\") pod \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.656252 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-swift-storage-0\") pod \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.656305 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-sb\") pod \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\" (UID: \"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9\") " Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.681943 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-kube-api-access-tqwn7" (OuterVolumeSpecName: "kube-api-access-tqwn7") pod "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" (UID: "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9"). InnerVolumeSpecName "kube-api-access-tqwn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.762631 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqwn7\" (UniqueName: \"kubernetes.io/projected/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-kube-api-access-tqwn7\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.843400 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" (UID: "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.869466 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.873333 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" (UID: "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.911212 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-config" (OuterVolumeSpecName: "config") pod "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" (UID: "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.915403 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" (UID: "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.918190 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" (UID: "89f9a34d-7ac6-4131-a9d2-af0019e6b5a9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.972303 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.972361 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.972372 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:20 crc kubenswrapper[4904]: I1121 13:58:20.972388 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:21 crc kubenswrapper[4904]: I1121 13:58:21.230200 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" Nov 21 13:58:21 crc kubenswrapper[4904]: I1121 13:58:21.230207 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5982r" event={"ID":"89f9a34d-7ac6-4131-a9d2-af0019e6b5a9","Type":"ContainerDied","Data":"b0fab927f79b610d885a230e82bf4e59e6c6a63360bd58669dea924b45f4aeae"} Nov 21 13:58:21 crc kubenswrapper[4904]: I1121 13:58:21.230275 4904 scope.go:117] "RemoveContainer" containerID="dd0355b0668e129c4a64c1d46c141a5af576d573c6cc94249e8312bf78c77353" Nov 21 13:58:21 crc kubenswrapper[4904]: I1121 13:58:21.232109 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerStarted","Data":"91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9"} Nov 21 13:58:21 crc kubenswrapper[4904]: I1121 13:58:21.302966 4904 scope.go:117] "RemoveContainer" containerID="ce128c5284eb63f237987df31b4fae389007729114d1f70ea64bef0475e07b58" Nov 21 13:58:21 crc kubenswrapper[4904]: I1121 13:58:21.305636 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5982r"] Nov 21 13:58:21 crc kubenswrapper[4904]: I1121 13:58:21.320980 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5982r"] Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.121574 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.122105 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="ceilometer-central-agent" containerID="cri-o://a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3" gracePeriod=30 Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.122791 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="proxy-httpd" containerID="cri-o://534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507" gracePeriod=30 Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.122856 4904 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="sg-core" containerID="cri-o://3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c" gracePeriod=30 Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.122892 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="ceilometer-notification-agent" containerID="cri-o://306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18" gracePeriod=30 Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.134986 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.221:3000/\": EOF" Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.253302 4904 generic.go:334] "Generic (PLEG): container finished" podID="f179df31-95c5-4ae1-8a64-60caefab9aea" containerID="bdac76f4d14d940974600f0038cd1de25e0dbd3c43b508a14b768c268586f2c6" exitCode=0 Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.253412 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-smj2v" event={"ID":"f179df31-95c5-4ae1-8a64-60caefab9aea","Type":"ContainerDied","Data":"bdac76f4d14d940974600f0038cd1de25e0dbd3c43b508a14b768c268586f2c6"} Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.258050 4904 generic.go:334] "Generic (PLEG): container finished" podID="648982b6-6d9e-4aa1-9ec1-6191871129bc" containerID="a2a53d8e3514da576bfa67410dc7dfed5e47f8fff861832ce4fa0b89d226cf9a" exitCode=0 Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.258120 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qqn46" event={"ID":"648982b6-6d9e-4aa1-9ec1-6191871129bc","Type":"ContainerDied","Data":"a2a53d8e3514da576bfa67410dc7dfed5e47f8fff861832ce4fa0b89d226cf9a"} Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.541409 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" path="/var/lib/kubelet/pods/89f9a34d-7ac6-4131-a9d2-af0019e6b5a9/volumes" Nov 21 13:58:22 crc kubenswrapper[4904]: I1121 13:58:22.601319 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.278759 4904 generic.go:334] "Generic (PLEG): container finished" podID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerID="534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507" exitCode=0 Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.279271 4904 generic.go:334] "Generic (PLEG): container finished" podID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerID="3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c" exitCode=2 Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.279280 4904 generic.go:334] "Generic (PLEG): container finished" podID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerID="a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3" exitCode=0 Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.279490 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerDied","Data":"534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507"} Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.279524 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerDied","Data":"3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c"} Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.279537 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerDied","Data":"a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3"} Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.745812 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qqn46" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.843497 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-combined-ca-bundle\") pod \"648982b6-6d9e-4aa1-9ec1-6191871129bc\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.843599 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwjkw\" (UniqueName: \"kubernetes.io/projected/648982b6-6d9e-4aa1-9ec1-6191871129bc-kube-api-access-mwjkw\") pod \"648982b6-6d9e-4aa1-9ec1-6191871129bc\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.843721 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-scripts\") pod \"648982b6-6d9e-4aa1-9ec1-6191871129bc\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.843796 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-config-data\") pod \"648982b6-6d9e-4aa1-9ec1-6191871129bc\" (UID: \"648982b6-6d9e-4aa1-9ec1-6191871129bc\") " Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.860907 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-scripts" (OuterVolumeSpecName: "scripts") pod "648982b6-6d9e-4aa1-9ec1-6191871129bc" (UID: "648982b6-6d9e-4aa1-9ec1-6191871129bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.863931 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648982b6-6d9e-4aa1-9ec1-6191871129bc-kube-api-access-mwjkw" (OuterVolumeSpecName: "kube-api-access-mwjkw") pod "648982b6-6d9e-4aa1-9ec1-6191871129bc" (UID: "648982b6-6d9e-4aa1-9ec1-6191871129bc"). InnerVolumeSpecName "kube-api-access-mwjkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.902859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "648982b6-6d9e-4aa1-9ec1-6191871129bc" (UID: "648982b6-6d9e-4aa1-9ec1-6191871129bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.935960 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-config-data" (OuterVolumeSpecName: "config-data") pod "648982b6-6d9e-4aa1-9ec1-6191871129bc" (UID: "648982b6-6d9e-4aa1-9ec1-6191871129bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.947400 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.947777 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwjkw\" (UniqueName: \"kubernetes.io/projected/648982b6-6d9e-4aa1-9ec1-6191871129bc-kube-api-access-mwjkw\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.947797 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.947807 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648982b6-6d9e-4aa1-9ec1-6191871129bc-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:23 crc kubenswrapper[4904]: I1121 13:58:23.986186 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-smj2v" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.050408 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-combined-ca-bundle\") pod \"f179df31-95c5-4ae1-8a64-60caefab9aea\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.063802 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-config-data\") pod \"f179df31-95c5-4ae1-8a64-60caefab9aea\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.063862 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnvv5\" (UniqueName: \"kubernetes.io/projected/f179df31-95c5-4ae1-8a64-60caefab9aea-kube-api-access-cnvv5\") pod \"f179df31-95c5-4ae1-8a64-60caefab9aea\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.063937 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-scripts\") pod \"f179df31-95c5-4ae1-8a64-60caefab9aea\" (UID: \"f179df31-95c5-4ae1-8a64-60caefab9aea\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.079141 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-scripts" (OuterVolumeSpecName: "scripts") pod "f179df31-95c5-4ae1-8a64-60caefab9aea" (UID: "f179df31-95c5-4ae1-8a64-60caefab9aea"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.115998 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f179df31-95c5-4ae1-8a64-60caefab9aea-kube-api-access-cnvv5" (OuterVolumeSpecName: "kube-api-access-cnvv5") pod "f179df31-95c5-4ae1-8a64-60caefab9aea" (UID: "f179df31-95c5-4ae1-8a64-60caefab9aea"). InnerVolumeSpecName "kube-api-access-cnvv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.118933 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-config-data" (OuterVolumeSpecName: "config-data") pod "f179df31-95c5-4ae1-8a64-60caefab9aea" (UID: "f179df31-95c5-4ae1-8a64-60caefab9aea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.130343 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f179df31-95c5-4ae1-8a64-60caefab9aea" (UID: "f179df31-95c5-4ae1-8a64-60caefab9aea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.160024 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.170540 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.170584 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.170597 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnvv5\" (UniqueName: \"kubernetes.io/projected/f179df31-95c5-4ae1-8a64-60caefab9aea-kube-api-access-cnvv5\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.170609 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f179df31-95c5-4ae1-8a64-60caefab9aea-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.259874 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.332091 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-smj2v" event={"ID":"f179df31-95c5-4ae1-8a64-60caefab9aea","Type":"ContainerDied","Data":"78ea6a297b483dc9230f5d1d2b0b85895b15c306e3d912e2879dbe52fca48fee"} Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.332180 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78ea6a297b483dc9230f5d1d2b0b85895b15c306e3d912e2879dbe52fca48fee" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.332311 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-smj2v" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.378009 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdkk\" (UniqueName: \"kubernetes.io/projected/e4439528-3e29-4bf5-bfa6-9305966ccd92-kube-api-access-tkdkk\") pod \"e4439528-3e29-4bf5-bfa6-9305966ccd92\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.378131 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-log-httpd\") pod \"e4439528-3e29-4bf5-bfa6-9305966ccd92\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.378206 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-combined-ca-bundle\") pod \"e4439528-3e29-4bf5-bfa6-9305966ccd92\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.378230 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-config-data\") pod \"e4439528-3e29-4bf5-bfa6-9305966ccd92\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.378396 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-run-httpd\") pod \"e4439528-3e29-4bf5-bfa6-9305966ccd92\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.378426 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-scripts\") pod \"e4439528-3e29-4bf5-bfa6-9305966ccd92\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.378462 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-sg-core-conf-yaml\") pod \"e4439528-3e29-4bf5-bfa6-9305966ccd92\" (UID: \"e4439528-3e29-4bf5-bfa6-9305966ccd92\") " Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.380055 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e4439528-3e29-4bf5-bfa6-9305966ccd92" (UID: "e4439528-3e29-4bf5-bfa6-9305966ccd92"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.380616 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.394134 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerStarted","Data":"1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5"} Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.394592 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-scripts" (OuterVolumeSpecName: "scripts") pod "e4439528-3e29-4bf5-bfa6-9305966ccd92" (UID: "e4439528-3e29-4bf5-bfa6-9305966ccd92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.394637 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e4439528-3e29-4bf5-bfa6-9305966ccd92" (UID: "e4439528-3e29-4bf5-bfa6-9305966ccd92"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.405923 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4439528-3e29-4bf5-bfa6-9305966ccd92-kube-api-access-tkdkk" (OuterVolumeSpecName: "kube-api-access-tkdkk") pod "e4439528-3e29-4bf5-bfa6-9305966ccd92" (UID: "e4439528-3e29-4bf5-bfa6-9305966ccd92"). InnerVolumeSpecName "kube-api-access-tkdkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.445115 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qqn46" event={"ID":"648982b6-6d9e-4aa1-9ec1-6191871129bc","Type":"ContainerDied","Data":"52bda9dd231ad59a755c98f58f70ccf7ad567fb421d87fca06d565bd4b12c625"} Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.445422 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52bda9dd231ad59a755c98f58f70ccf7ad567fb421d87fca06d565bd4b12c625" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.445575 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qqn46" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.452324 4904 generic.go:334] "Generic (PLEG): container finished" podID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerID="306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18" exitCode=0 Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.452468 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerDied","Data":"306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18"} Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.452507 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e4439528-3e29-4bf5-bfa6-9305966ccd92","Type":"ContainerDied","Data":"7e186acb86ca1dfeb7ac7e6c56fb08210210286ac43f0c6ab40b287a2196279f"} Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.452528 4904 scope.go:117] "RemoveContainer" containerID="534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.452716 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.465522 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.487369 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e4439528-3e29-4bf5-bfa6-9305966ccd92-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.487393 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.487402 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkdkk\" (UniqueName: \"kubernetes.io/projected/e4439528-3e29-4bf5-bfa6-9305966ccd92-kube-api-access-tkdkk\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.579880 4904 scope.go:117] "RemoveContainer" containerID="3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.612254 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e4439528-3e29-4bf5-bfa6-9305966ccd92" (UID: "e4439528-3e29-4bf5-bfa6-9305966ccd92"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.673814 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.687943 4904 scope.go:117] "RemoveContainer" containerID="306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.688870 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.688904 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.694623 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.695242 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" containerName="dnsmasq-dns" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.695268 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" containerName="dnsmasq-dns" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.695290 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="sg-core" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.695298 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="sg-core" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.695311 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="proxy-httpd" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.695319 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="proxy-httpd" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.695338 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648982b6-6d9e-4aa1-9ec1-6191871129bc" containerName="nova-manage" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.695346 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="648982b6-6d9e-4aa1-9ec1-6191871129bc" containerName="nova-manage" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.695379 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="ceilometer-central-agent" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.695390 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="ceilometer-central-agent" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.695405 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" containerName="init" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.695413 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" containerName="init" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.695429 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f179df31-95c5-4ae1-8a64-60caefab9aea" containerName="nova-cell1-conductor-db-sync" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.695437 4904 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f179df31-95c5-4ae1-8a64-60caefab9aea" containerName="nova-cell1-conductor-db-sync" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.695445 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="ceilometer-notification-agent" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.695453 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="ceilometer-notification-agent" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.697969 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f9a34d-7ac6-4131-a9d2-af0019e6b5a9" containerName="dnsmasq-dns" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.698016 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f179df31-95c5-4ae1-8a64-60caefab9aea" containerName="nova-cell1-conductor-db-sync" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.698034 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="ceilometer-central-agent" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.698048 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="sg-core" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.698074 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="proxy-httpd" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.698093 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" containerName="ceilometer-notification-agent" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.698105 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="648982b6-6d9e-4aa1-9ec1-6191871129bc" containerName="nova-manage" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.699358 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.699713 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.703695 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.709183 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4439528-3e29-4bf5-bfa6-9305966ccd92" (UID: "e4439528-3e29-4bf5-bfa6-9305966ccd92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.748456 4904 scope.go:117] "RemoveContainer" containerID="a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.748648 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.757923 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-config-data" (OuterVolumeSpecName: "config-data") pod "e4439528-3e29-4bf5-bfa6-9305966ccd92" (UID: "e4439528-3e29-4bf5-bfa6-9305966ccd92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.774973 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.796892 4904 scope.go:117] "RemoveContainer" containerID="534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.799122 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507\": container with ID starting with 534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507 not found: ID does not exist" containerID="534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.799164 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507"} err="failed to get container status \"534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507\": rpc error: code = NotFound desc = could not find container \"534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507\": container with ID starting with 534036a0d2f4649410a76cc15182c556fa1e1d75b0669d6bbd2d8b1932135507 not found: ID does not exist" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.799193 4904 scope.go:117] "RemoveContainer" containerID="3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.802948 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c\": container with ID starting with 3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c not found: ID does not exist" containerID="3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.802976 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c"} err="failed to get container status \"3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c\": rpc error: code = NotFound desc = could not find container \"3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c\": container with ID starting with 3569ccc8e4988aaccce9e74089e0d2bee1f1f35229809d8328e83096f046be6c not found: ID does not exist" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.802993 4904 scope.go:117] "RemoveContainer" 
containerID="306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.803721 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.804681 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e907ba-30e5-4c8e-921d-6560b56a80d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.804853 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e907ba-30e5-4c8e-921d-6560b56a80d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.805180 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2clt7\" (UniqueName: \"kubernetes.io/projected/89e907ba-30e5-4c8e-921d-6560b56a80d8-kube-api-access-2clt7\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.805897 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18\": container with ID starting with 306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18 not found: ID does not exist" containerID="306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.805921 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18"} err="failed to get container status \"306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18\": rpc error: code = NotFound desc = could not find container \"306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18\": container with ID starting with 306e7128a2b68b2b9865a9c3bc4ec1ff9a4ae1098da38cbaaa9b72c876742c18 not found: ID does not exist" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.805935 4904 scope.go:117] "RemoveContainer" containerID="a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3" Nov 21 13:58:24 crc kubenswrapper[4904]: E1121 13:58:24.807125 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3\": container with ID starting with a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3 not found: ID does not exist" containerID="a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.807145 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3"} err="failed to get container status \"a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3\": rpc error: code = NotFound desc = could not find container 
\"a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3\": container with ID starting with a7e08a216e742b4a4366aadd7a5b90350655c40f75c51d370598c7749091c5d3 not found: ID does not exist" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.807128 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.807185 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4439528-3e29-4bf5-bfa6-9305966ccd92-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.909370 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2clt7\" (UniqueName: \"kubernetes.io/projected/89e907ba-30e5-4c8e-921d-6560b56a80d8-kube-api-access-2clt7\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.909533 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e907ba-30e5-4c8e-921d-6560b56a80d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.909588 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e907ba-30e5-4c8e-921d-6560b56a80d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.916799 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89e907ba-30e5-4c8e-921d-6560b56a80d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.922257 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e907ba-30e5-4c8e-921d-6560b56a80d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:24 crc kubenswrapper[4904]: I1121 13:58:24.932951 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2clt7\" (UniqueName: \"kubernetes.io/projected/89e907ba-30e5-4c8e-921d-6560b56a80d8-kube-api-access-2clt7\") pod \"nova-cell1-conductor-0\" (UID: \"89e907ba-30e5-4c8e-921d-6560b56a80d8\") " pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.018370 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.106019 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.151156 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.218891 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.236225 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.241688 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.241921 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.256163 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.323257 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-run-httpd\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.331467 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.331641 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnfx7\" (UniqueName: \"kubernetes.io/projected/0fa2fd43-5704-4d6b-bb77-92b551fbd960-kube-api-access-cnfx7\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.332018 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.332151 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-scripts\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.332208 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-config-data\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.332283 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-log-httpd\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.435402 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.435482 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-scripts\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.435515 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-config-data\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.435561 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-log-httpd\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.435665 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-run-httpd\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.435696 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.435737 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnfx7\" (UniqueName: \"kubernetes.io/projected/0fa2fd43-5704-4d6b-bb77-92b551fbd960-kube-api-access-cnfx7\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.437118 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-log-httpd\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.440434 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-run-httpd\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.467502 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-scripts\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.478260 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.479759 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-config-data\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.481640 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnfx7\" (UniqueName: \"kubernetes.io/projected/0fa2fd43-5704-4d6b-bb77-92b551fbd960-kube-api-access-cnfx7\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.483460 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.494080 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-log" containerID="cri-o://d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e" gracePeriod=30 Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.496065 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-api" containerID="cri-o://59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39" gracePeriod=30 Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.513486 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.226:8774/\": EOF" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.514612 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.226:8774/\": EOF" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.515364 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:58:25 crc kubenswrapper[4904]: E1121 13:58:25.515594 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:58:25 crc 
kubenswrapper[4904]: E1121 13:58:25.550139 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4439528_3e29_4bf5_bfa6_9305966ccd92.slice/crio-7e186acb86ca1dfeb7ac7e6c56fb08210210286ac43f0c6ab40b287a2196279f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4439528_3e29_4bf5_bfa6_9305966ccd92.slice\": RecentStats: unable to find data in memory cache]" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.567483 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.676337 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:25 crc kubenswrapper[4904]: I1121 13:58:25.770403 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 21 13:58:26 crc kubenswrapper[4904]: I1121 13:58:26.521201 4904 generic.go:334] "Generic (PLEG): container finished" podID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerID="d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e" exitCode=143 Nov 21 13:58:26 crc kubenswrapper[4904]: I1121 13:58:26.532151 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="03692bda-5fd5-425f-bce4-c39fd4ee5a1b" containerName="nova-scheduler-scheduler" containerID="cri-o://254f6f1a72df6bb428f10349938769100b4faad7680a7d9f16c2e2134a113881" gracePeriod=30 Nov 21 13:58:26 crc kubenswrapper[4904]: I1121 13:58:26.535860 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4439528-3e29-4bf5-bfa6-9305966ccd92" path="/var/lib/kubelet/pods/e4439528-3e29-4bf5-bfa6-9305966ccd92/volumes" Nov 21 13:58:26 crc kubenswrapper[4904]: I1121 13:58:26.537136 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"89e907ba-30e5-4c8e-921d-6560b56a80d8","Type":"ContainerStarted","Data":"c47c8e8030f78bbb3aaceede185b34b77a44d5a2c8eb2c151dcd4c9134573981"} Nov 21 13:58:26 crc kubenswrapper[4904]: I1121 13:58:26.537187 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c823a88e-4fa1-451f-88f5-cd1397a4ae34","Type":"ContainerDied","Data":"d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e"} Nov 21 13:58:26 crc kubenswrapper[4904]: I1121 13:58:26.671132 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:26 crc kubenswrapper[4904]: W1121 13:58:26.672832 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fa2fd43_5704_4d6b_bb77_92b551fbd960.slice/crio-fd8255bfe778162ed728f4531da11e39c8ec2686ec9455250963dbf1352bfd21 WatchSource:0}: Error finding container fd8255bfe778162ed728f4531da11e39c8ec2686ec9455250963dbf1352bfd21: Status 404 returned error can't find the container with id fd8255bfe778162ed728f4531da11e39c8ec2686ec9455250963dbf1352bfd21 Nov 21 13:58:27 crc kubenswrapper[4904]: I1121 13:58:27.551523 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerStarted","Data":"fd8255bfe778162ed728f4531da11e39c8ec2686ec9455250963dbf1352bfd21"} Nov 21 13:58:27 crc kubenswrapper[4904]: I1121 13:58:27.555284 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"89e907ba-30e5-4c8e-921d-6560b56a80d8","Type":"ContainerStarted","Data":"3a48b96f09e661151a2a9546dfaf2311ed51f8695464c776dde33f5cc7833ada"} Nov 21 13:58:27 crc kubenswrapper[4904]: I1121 13:58:27.557648 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:27 crc kubenswrapper[4904]: I1121 13:58:27.583697 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerStarted","Data":"af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9"} Nov 21 13:58:27 crc kubenswrapper[4904]: I1121 13:58:27.587100 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.58707631 podStartE2EDuration="3.58707631s" podCreationTimestamp="2025-11-21 13:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:27.582994571 +0000 UTC m=+1581.704527123" watchObservedRunningTime="2025-11-21 13:58:27.58707631 +0000 UTC m=+1581.708608862" Nov 21 13:58:28 crc kubenswrapper[4904]: I1121 13:58:28.617694 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerStarted","Data":"a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1"} Nov 21 13:58:29 crc kubenswrapper[4904]: E1121 13:58:29.150131 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="254f6f1a72df6bb428f10349938769100b4faad7680a7d9f16c2e2134a113881" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:58:29 crc kubenswrapper[4904]: E1121 13:58:29.152580 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="254f6f1a72df6bb428f10349938769100b4faad7680a7d9f16c2e2134a113881" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:58:29 crc kubenswrapper[4904]: E1121 13:58:29.157001 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="254f6f1a72df6bb428f10349938769100b4faad7680a7d9f16c2e2134a113881" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:58:29 crc kubenswrapper[4904]: E1121 13:58:29.157120 4904 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="03692bda-5fd5-425f-bce4-c39fd4ee5a1b" containerName="nova-scheduler-scheduler" Nov 21 13:58:30 crc kubenswrapper[4904]: I1121 13:58:30.641291 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerStarted","Data":"471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f"} Nov 21 13:58:30 crc kubenswrapper[4904]: I1121 13:58:30.641468 4904 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/aodh-0" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-api" containerID="cri-o://91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9" gracePeriod=30 Nov 21 13:58:30 crc kubenswrapper[4904]: I1121 13:58:30.641518 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-listener" containerID="cri-o://471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f" gracePeriod=30 Nov 21 13:58:30 crc kubenswrapper[4904]: I1121 13:58:30.641554 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-notifier" containerID="cri-o://af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9" gracePeriod=30 Nov 21 13:58:30 crc kubenswrapper[4904]: I1121 13:58:30.641565 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-evaluator" containerID="cri-o://1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5" gracePeriod=30 Nov 21 13:58:30 crc kubenswrapper[4904]: I1121 13:58:30.645812 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerStarted","Data":"1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9"} Nov 21 13:58:30 crc kubenswrapper[4904]: I1121 13:58:30.645874 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerStarted","Data":"d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4"} Nov 21 13:58:30 crc kubenswrapper[4904]: I1121 13:58:30.675008 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.982371735 podStartE2EDuration="12.674973951s" podCreationTimestamp="2025-11-21 13:58:18 +0000 UTC" firstStartedPulling="2025-11-21 13:58:19.39988211 +0000 UTC m=+1573.521414652" lastFinishedPulling="2025-11-21 13:58:29.092484316 +0000 UTC m=+1583.214016868" observedRunningTime="2025-11-21 13:58:30.665360566 +0000 UTC m=+1584.786893118" watchObservedRunningTime="2025-11-21 13:58:30.674973951 +0000 UTC m=+1584.796506513" Nov 21 13:58:31 crc kubenswrapper[4904]: I1121 13:58:31.717292 4904 generic.go:334] "Generic (PLEG): container finished" podID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerID="af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9" exitCode=0 Nov 21 13:58:31 crc kubenswrapper[4904]: I1121 13:58:31.717763 4904 generic.go:334] "Generic (PLEG): container finished" podID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerID="1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5" exitCode=0 Nov 21 13:58:31 crc kubenswrapper[4904]: I1121 13:58:31.717776 4904 generic.go:334] "Generic (PLEG): container finished" podID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerID="91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9" exitCode=0 Nov 21 13:58:31 crc kubenswrapper[4904]: I1121 13:58:31.717770 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerDied","Data":"af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9"} Nov 21 13:58:31 crc kubenswrapper[4904]: I1121 13:58:31.717832 4904 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerDied","Data":"1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5"} Nov 21 13:58:31 crc kubenswrapper[4904]: I1121 13:58:31.717844 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerDied","Data":"91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9"} Nov 21 13:58:31 crc kubenswrapper[4904]: I1121 13:58:31.727872 4904 generic.go:334] "Generic (PLEG): container finished" podID="03692bda-5fd5-425f-bce4-c39fd4ee5a1b" containerID="254f6f1a72df6bb428f10349938769100b4faad7680a7d9f16c2e2134a113881" exitCode=0 Nov 21 13:58:31 crc kubenswrapper[4904]: I1121 13:58:31.727964 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"03692bda-5fd5-425f-bce4-c39fd4ee5a1b","Type":"ContainerDied","Data":"254f6f1a72df6bb428f10349938769100b4faad7680a7d9f16c2e2134a113881"} Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.258994 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.353199 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-combined-ca-bundle\") pod \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.353791 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47r54\" (UniqueName: \"kubernetes.io/projected/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-kube-api-access-47r54\") pod \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.353949 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-config-data\") pod \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\" (UID: \"03692bda-5fd5-425f-bce4-c39fd4ee5a1b\") " Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.365013 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-kube-api-access-47r54" (OuterVolumeSpecName: "kube-api-access-47r54") pod "03692bda-5fd5-425f-bce4-c39fd4ee5a1b" (UID: "03692bda-5fd5-425f-bce4-c39fd4ee5a1b"). InnerVolumeSpecName "kube-api-access-47r54". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.396012 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-config-data" (OuterVolumeSpecName: "config-data") pod "03692bda-5fd5-425f-bce4-c39fd4ee5a1b" (UID: "03692bda-5fd5-425f-bce4-c39fd4ee5a1b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.414150 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03692bda-5fd5-425f-bce4-c39fd4ee5a1b" (UID: "03692bda-5fd5-425f-bce4-c39fd4ee5a1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.459944 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.459994 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47r54\" (UniqueName: \"kubernetes.io/projected/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-kube-api-access-47r54\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.460007 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03692bda-5fd5-425f-bce4-c39fd4ee5a1b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.720234 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.748935 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.748938 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"03692bda-5fd5-425f-bce4-c39fd4ee5a1b","Type":"ContainerDied","Data":"e3fc39047349a2491d8ab5a357c0ce3c3b2bd5494c8307f9c5faa6dee9692228"} Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.749074 4904 scope.go:117] "RemoveContainer" containerID="254f6f1a72df6bb428f10349938769100b4faad7680a7d9f16c2e2134a113881" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.757322 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerStarted","Data":"fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4"} Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.757949 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="ceilometer-central-agent" containerID="cri-o://a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1" gracePeriod=30 Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.758236 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.758308 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="proxy-httpd" containerID="cri-o://fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4" gracePeriod=30 Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.758370 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="sg-core" 
containerID="cri-o://1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9" gracePeriod=30 Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.758433 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="ceilometer-notification-agent" containerID="cri-o://d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4" gracePeriod=30 Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.787644 4904 generic.go:334] "Generic (PLEG): container finished" podID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerID="59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39" exitCode=0 Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.787730 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c823a88e-4fa1-451f-88f5-cd1397a4ae34","Type":"ContainerDied","Data":"59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39"} Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.787776 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c823a88e-4fa1-451f-88f5-cd1397a4ae34","Type":"ContainerDied","Data":"e0c692865e60454d4ace7e106b8e14a58dc7c6e28cd0407da2d16bddc4af2c20"} Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.787864 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.797625 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.822982 4904 scope.go:117] "RemoveContainer" containerID="59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.830206 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.864709 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:58:32 crc kubenswrapper[4904]: E1121 13:58:32.865508 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03692bda-5fd5-425f-bce4-c39fd4ee5a1b" containerName="nova-scheduler-scheduler" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.865528 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="03692bda-5fd5-425f-bce4-c39fd4ee5a1b" containerName="nova-scheduler-scheduler" Nov 21 13:58:32 crc kubenswrapper[4904]: E1121 13:58:32.865574 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-api" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.865584 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-api" Nov 21 13:58:32 crc kubenswrapper[4904]: E1121 13:58:32.865619 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-log" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.865629 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-log" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.866053 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="03692bda-5fd5-425f-bce4-c39fd4ee5a1b" containerName="nova-scheduler-scheduler" Nov 21 13:58:32 crc 
kubenswrapper[4904]: I1121 13:58:32.866097 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-log" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.866126 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" containerName="nova-api-api" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.866100 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.720492551 podStartE2EDuration="7.866076211s" podCreationTimestamp="2025-11-21 13:58:25 +0000 UTC" firstStartedPulling="2025-11-21 13:58:26.676258781 +0000 UTC m=+1580.797791323" lastFinishedPulling="2025-11-21 13:58:31.821842431 +0000 UTC m=+1585.943374983" observedRunningTime="2025-11-21 13:58:32.812444114 +0000 UTC m=+1586.933976666" watchObservedRunningTime="2025-11-21 13:58:32.866076211 +0000 UTC m=+1586.987608753" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.867589 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.869691 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c823a88e-4fa1-451f-88f5-cd1397a4ae34-logs\") pod \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.870010 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-config-data\") pod \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.870060 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-combined-ca-bundle\") pod \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.870260 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6d5d\" (UniqueName: \"kubernetes.io/projected/c823a88e-4fa1-451f-88f5-cd1397a4ae34-kube-api-access-x6d5d\") pod \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\" (UID: \"c823a88e-4fa1-451f-88f5-cd1397a4ae34\") " Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.870283 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c823a88e-4fa1-451f-88f5-cd1397a4ae34-logs" (OuterVolumeSpecName: "logs") pod "c823a88e-4fa1-451f-88f5-cd1397a4ae34" (UID: "c823a88e-4fa1-451f-88f5-cd1397a4ae34"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.870832 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c823a88e-4fa1-451f-88f5-cd1397a4ae34-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.876931 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.887198 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c823a88e-4fa1-451f-88f5-cd1397a4ae34-kube-api-access-x6d5d" (OuterVolumeSpecName: "kube-api-access-x6d5d") pod "c823a88e-4fa1-451f-88f5-cd1397a4ae34" (UID: "c823a88e-4fa1-451f-88f5-cd1397a4ae34"). InnerVolumeSpecName "kube-api-access-x6d5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.906743 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.924240 4904 scope.go:117] "RemoveContainer" containerID="d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.925566 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-config-data" (OuterVolumeSpecName: "config-data") pod "c823a88e-4fa1-451f-88f5-cd1397a4ae34" (UID: "c823a88e-4fa1-451f-88f5-cd1397a4ae34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.950418 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c823a88e-4fa1-451f-88f5-cd1397a4ae34" (UID: "c823a88e-4fa1-451f-88f5-cd1397a4ae34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.962117 4904 scope.go:117] "RemoveContainer" containerID="59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39" Nov 21 13:58:32 crc kubenswrapper[4904]: E1121 13:58:32.962819 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39\": container with ID starting with 59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39 not found: ID does not exist" containerID="59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.962869 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39"} err="failed to get container status \"59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39\": rpc error: code = NotFound desc = could not find container \"59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39\": container with ID starting with 59c7fba6d338172a90030c178fb0453a0965c4844908e726c7d7c9d2fabeec39 not found: ID does not exist" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.962902 4904 scope.go:117] "RemoveContainer" containerID="d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e" Nov 21 13:58:32 crc kubenswrapper[4904]: E1121 13:58:32.963118 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e\": container with ID starting with d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e not found: ID does not exist" containerID="d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.963141 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e"} err="failed to get container status \"d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e\": rpc error: code = NotFound desc = could not find container \"d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e\": container with ID starting with d1332abe9d5445990d690b3c046ea2b2046092cf329e5d8caf05590eeff82b7e not found: ID does not exist" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.972544 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-config-data\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.972938 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.973062 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82nwk\" (UniqueName: 
\"kubernetes.io/projected/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-kube-api-access-82nwk\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.973462 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.973563 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c823a88e-4fa1-451f-88f5-cd1397a4ae34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:32 crc kubenswrapper[4904]: I1121 13:58:32.973673 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6d5d\" (UniqueName: \"kubernetes.io/projected/c823a88e-4fa1-451f-88f5-cd1397a4ae34-kube-api-access-x6d5d\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.075895 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-config-data\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.075956 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.075993 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82nwk\" (UniqueName: \"kubernetes.io/projected/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-kube-api-access-82nwk\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.080377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.080535 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-config-data\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.095616 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82nwk\" (UniqueName: \"kubernetes.io/projected/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-kube-api-access-82nwk\") pod \"nova-scheduler-0\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " pod="openstack/nova-scheduler-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.125161 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.135631 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.159348 
4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.164896 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.170690 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.171229 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.232476 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.279592 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-config-data\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.279667 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.279692 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313456af-18a0-4248-adc2-3480a25d2e3f-logs\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.279729 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xn6t\" (UniqueName: \"kubernetes.io/projected/313456af-18a0-4248-adc2-3480a25d2e3f-kube-api-access-4xn6t\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.382012 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-config-data\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.382369 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.382392 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313456af-18a0-4248-adc2-3480a25d2e3f-logs\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.382423 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xn6t\" (UniqueName: \"kubernetes.io/projected/313456af-18a0-4248-adc2-3480a25d2e3f-kube-api-access-4xn6t\") pod \"nova-api-0\" (UID: 
\"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.384707 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313456af-18a0-4248-adc2-3480a25d2e3f-logs\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.386387 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-config-data\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.402314 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.406885 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xn6t\" (UniqueName: \"kubernetes.io/projected/313456af-18a0-4248-adc2-3480a25d2e3f-kube-api-access-4xn6t\") pod \"nova-api-0\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") " pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.482556 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.782285 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.801599 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4","Type":"ContainerStarted","Data":"82d41089d5b62323d945e248edc960094f41197d093442972b1e93643a7e345a"} Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.812153 4904 generic.go:334] "Generic (PLEG): container finished" podID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerID="fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4" exitCode=0 Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.812191 4904 generic.go:334] "Generic (PLEG): container finished" podID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerID="1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9" exitCode=2 Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.812201 4904 generic.go:334] "Generic (PLEG): container finished" podID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerID="d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4" exitCode=0 Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.812226 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerDied","Data":"fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4"} Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.812258 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerDied","Data":"1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9"} Nov 21 13:58:33 crc kubenswrapper[4904]: I1121 13:58:33.812268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerDied","Data":"d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4"} Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.014456 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:34 crc kubenswrapper[4904]: W1121 13:58:34.016718 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod313456af_18a0_4248_adc2_3480a25d2e3f.slice/crio-4005a08bfda36f99feaccf27ef77ddeea35f522bf44439d2b475b2cfeca6465b WatchSource:0}: Error finding container 4005a08bfda36f99feaccf27ef77ddeea35f522bf44439d2b475b2cfeca6465b: Status 404 returned error can't find the container with id 4005a08bfda36f99feaccf27ef77ddeea35f522bf44439d2b475b2cfeca6465b Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.531253 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03692bda-5fd5-425f-bce4-c39fd4ee5a1b" path="/var/lib/kubelet/pods/03692bda-5fd5-425f-bce4-c39fd4ee5a1b/volumes" Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.532066 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c823a88e-4fa1-451f-88f5-cd1397a4ae34" path="/var/lib/kubelet/pods/c823a88e-4fa1-451f-88f5-cd1397a4ae34/volumes" Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.828781 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4","Type":"ContainerStarted","Data":"6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630"} Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.835041 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313456af-18a0-4248-adc2-3480a25d2e3f","Type":"ContainerStarted","Data":"af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5"} Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.835107 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313456af-18a0-4248-adc2-3480a25d2e3f","Type":"ContainerStarted","Data":"485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157"} Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.835118 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313456af-18a0-4248-adc2-3480a25d2e3f","Type":"ContainerStarted","Data":"4005a08bfda36f99feaccf27ef77ddeea35f522bf44439d2b475b2cfeca6465b"} Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.871014 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.870988106 podStartE2EDuration="2.870988106s" podCreationTimestamp="2025-11-21 13:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:34.860383288 +0000 UTC m=+1588.981915840" watchObservedRunningTime="2025-11-21 13:58:34.870988106 +0000 UTC m=+1588.992520658" Nov 21 13:58:34 crc kubenswrapper[4904]: I1121 13:58:34.893624 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.8935942369999998 podStartE2EDuration="1.893594237s" podCreationTimestamp="2025-11-21 13:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 
13:58:34.886239057 +0000 UTC m=+1589.007771609" watchObservedRunningTime="2025-11-21 13:58:34.893594237 +0000 UTC m=+1589.015126789" Nov 21 13:58:35 crc kubenswrapper[4904]: I1121 13:58:35.076573 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.476348 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.605290 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnfx7\" (UniqueName: \"kubernetes.io/projected/0fa2fd43-5704-4d6b-bb77-92b551fbd960-kube-api-access-cnfx7\") pod \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.605362 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-run-httpd\") pod \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.605429 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-config-data\") pod \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.605479 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-combined-ca-bundle\") pod \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.605636 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-scripts\") pod \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.605696 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-log-httpd\") pod \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.605720 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-sg-core-conf-yaml\") pod \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\" (UID: \"0fa2fd43-5704-4d6b-bb77-92b551fbd960\") " Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.605821 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0fa2fd43-5704-4d6b-bb77-92b551fbd960" (UID: "0fa2fd43-5704-4d6b-bb77-92b551fbd960"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.606646 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0fa2fd43-5704-4d6b-bb77-92b551fbd960" (UID: "0fa2fd43-5704-4d6b-bb77-92b551fbd960"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.607678 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.607706 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0fa2fd43-5704-4d6b-bb77-92b551fbd960-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.618055 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-scripts" (OuterVolumeSpecName: "scripts") pod "0fa2fd43-5704-4d6b-bb77-92b551fbd960" (UID: "0fa2fd43-5704-4d6b-bb77-92b551fbd960"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.618164 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa2fd43-5704-4d6b-bb77-92b551fbd960-kube-api-access-cnfx7" (OuterVolumeSpecName: "kube-api-access-cnfx7") pod "0fa2fd43-5704-4d6b-bb77-92b551fbd960" (UID: "0fa2fd43-5704-4d6b-bb77-92b551fbd960"). InnerVolumeSpecName "kube-api-access-cnfx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.642678 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0fa2fd43-5704-4d6b-bb77-92b551fbd960" (UID: "0fa2fd43-5704-4d6b-bb77-92b551fbd960"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.710461 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.710511 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.710526 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnfx7\" (UniqueName: \"kubernetes.io/projected/0fa2fd43-5704-4d6b-bb77-92b551fbd960-kube-api-access-cnfx7\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.722260 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fa2fd43-5704-4d6b-bb77-92b551fbd960" (UID: "0fa2fd43-5704-4d6b-bb77-92b551fbd960"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.750611 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-config-data" (OuterVolumeSpecName: "config-data") pod "0fa2fd43-5704-4d6b-bb77-92b551fbd960" (UID: "0fa2fd43-5704-4d6b-bb77-92b551fbd960"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.813404 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.813458 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fa2fd43-5704-4d6b-bb77-92b551fbd960-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.879853 4904 generic.go:334] "Generic (PLEG): container finished" podID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerID="a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1" exitCode=0 Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.879928 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerDied","Data":"a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1"} Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.879983 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0fa2fd43-5704-4d6b-bb77-92b551fbd960","Type":"ContainerDied","Data":"fd8255bfe778162ed728f4531da11e39c8ec2686ec9455250963dbf1352bfd21"} Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.880005 4904 scope.go:117] "RemoveContainer" containerID="fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.880296 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.940802 4904 scope.go:117] "RemoveContainer" containerID="1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9" Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.942926 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.976327 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:37 crc kubenswrapper[4904]: I1121 13:58:37.992797 4904 scope.go:117] "RemoveContainer" containerID="d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.003263 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:38 crc kubenswrapper[4904]: E1121 13:58:38.003911 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="ceilometer-notification-agent" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.003940 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="ceilometer-notification-agent" Nov 21 13:58:38 crc kubenswrapper[4904]: E1121 13:58:38.003980 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="ceilometer-central-agent" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.003989 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="ceilometer-central-agent" Nov 21 13:58:38 crc kubenswrapper[4904]: E1121 13:58:38.004027 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="proxy-httpd" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.004035 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="proxy-httpd" Nov 21 13:58:38 crc kubenswrapper[4904]: E1121 13:58:38.004050 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="sg-core" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.004057 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="sg-core" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.004291 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="sg-core" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.004324 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="ceilometer-central-agent" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.004340 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="proxy-httpd" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.004362 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" containerName="ceilometer-notification-agent" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.007830 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.011641 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.011673 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.028504 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.048695 4904 scope.go:117] "RemoveContainer" containerID="a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.120783 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-run-httpd\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.120847 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.120915 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-log-httpd\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.121019 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsqxr\" (UniqueName: \"kubernetes.io/projected/5149390b-82c0-4793-a607-d245ecc83257-kube-api-access-xsqxr\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.121047 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.121069 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-config-data\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.121089 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-scripts\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.124051 4904 scope.go:117] "RemoveContainer" containerID="fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4" Nov 21 13:58:38 crc kubenswrapper[4904]: E1121 
13:58:38.124542 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4\": container with ID starting with fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4 not found: ID does not exist" containerID="fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.124579 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4"} err="failed to get container status \"fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4\": rpc error: code = NotFound desc = could not find container \"fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4\": container with ID starting with fbabdf1dff311e0c149ac55574b8ac38ff3cf860bb4aafb98c58312de310afb4 not found: ID does not exist" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.124870 4904 scope.go:117] "RemoveContainer" containerID="1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9" Nov 21 13:58:38 crc kubenswrapper[4904]: E1121 13:58:38.125259 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9\": container with ID starting with 1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9 not found: ID does not exist" containerID="1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.125301 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9"} err="failed to get container status \"1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9\": rpc error: code = NotFound desc = could not find container \"1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9\": container with ID starting with 1a27343b604460e457dff22e7973400d8f92ea112664c2de5389fa0a293be6a9 not found: ID does not exist" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.125330 4904 scope.go:117] "RemoveContainer" containerID="d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4" Nov 21 13:58:38 crc kubenswrapper[4904]: E1121 13:58:38.125642 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4\": container with ID starting with d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4 not found: ID does not exist" containerID="d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.125688 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4"} err="failed to get container status \"d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4\": rpc error: code = NotFound desc = could not find container \"d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4\": container with ID starting with d1f59e8b44e4293c46bb65de97edd34f684da3906f112466dcdc2d16364c73b4 not found: ID does not exist" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.125706 4904 
scope.go:117] "RemoveContainer" containerID="a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1" Nov 21 13:58:38 crc kubenswrapper[4904]: E1121 13:58:38.126115 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1\": container with ID starting with a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1 not found: ID does not exist" containerID="a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.126143 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1"} err="failed to get container status \"a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1\": rpc error: code = NotFound desc = could not find container \"a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1\": container with ID starting with a8ebec428dd6f2acb5e955876dbdccb5215719a1c2a302020892532241d5b3d1 not found: ID does not exist" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.223214 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsqxr\" (UniqueName: \"kubernetes.io/projected/5149390b-82c0-4793-a607-d245ecc83257-kube-api-access-xsqxr\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.223585 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.223723 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-config-data\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.223891 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-scripts\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.224104 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-run-httpd\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.224253 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.224457 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-log-httpd\") pod 
\"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.225285 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-log-httpd\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.225692 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-run-httpd\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.232770 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.235888 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.240314 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-config-data\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.243376 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.261321 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsqxr\" (UniqueName: \"kubernetes.io/projected/5149390b-82c0-4793-a607-d245ecc83257-kube-api-access-xsqxr\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.262342 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-scripts\") pod \"ceilometer-0\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") " pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.414674 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:58:38 crc kubenswrapper[4904]: I1121 13:58:38.565611 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa2fd43-5704-4d6b-bb77-92b551fbd960" path="/var/lib/kubelet/pods/0fa2fd43-5704-4d6b-bb77-92b551fbd960/volumes" Nov 21 13:58:39 crc kubenswrapper[4904]: I1121 13:58:38.942117 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:39 crc kubenswrapper[4904]: W1121 13:58:38.949187 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5149390b_82c0_4793_a607_d245ecc83257.slice/crio-d70ec211dd3732aa404ed8a4a55661647e0b26d4a686736133643138e379cc94 WatchSource:0}: Error finding container d70ec211dd3732aa404ed8a4a55661647e0b26d4a686736133643138e379cc94: Status 404 returned error can't find the container with id d70ec211dd3732aa404ed8a4a55661647e0b26d4a686736133643138e379cc94 Nov 21 13:58:39 crc kubenswrapper[4904]: I1121 13:58:38.953019 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 13:58:39 crc kubenswrapper[4904]: I1121 13:58:39.906362 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerStarted","Data":"68276a88ab1cbf53f2e5e847c38463e72c56580083847856e1de7a512229d184"} Nov 21 13:58:39 crc kubenswrapper[4904]: I1121 13:58:39.906917 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerStarted","Data":"d70ec211dd3732aa404ed8a4a55661647e0b26d4a686736133643138e379cc94"} Nov 21 13:58:40 crc kubenswrapper[4904]: I1121 13:58:40.516675 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:58:40 crc kubenswrapper[4904]: E1121 13:58:40.517465 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:58:40 crc kubenswrapper[4904]: I1121 13:58:40.928190 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerStarted","Data":"a224efb8ab37c1224142358e785d2a4a2400d574da5e8dc4400a6b88fc1c2fc6"} Nov 21 13:58:41 crc kubenswrapper[4904]: I1121 13:58:41.944226 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerStarted","Data":"1cbfe16fd938c7af28f50418f9eaa4af9730706364975df79e4f444662f952e8"} Nov 21 13:58:42 crc kubenswrapper[4904]: I1121 13:58:42.960339 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerStarted","Data":"9900feedaf9707bb1effe6e009e394c73df19036b0a8919486eca5dcbf7ba8e7"} Nov 21 13:58:42 crc kubenswrapper[4904]: I1121 13:58:42.963065 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:58:42 crc kubenswrapper[4904]: I1121 13:58:42.996222 4904 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.360060041 podStartE2EDuration="5.996188987s" podCreationTimestamp="2025-11-21 13:58:37 +0000 UTC" firstStartedPulling="2025-11-21 13:58:38.95281736 +0000 UTC m=+1593.074349912" lastFinishedPulling="2025-11-21 13:58:42.588946286 +0000 UTC m=+1596.710478858" observedRunningTime="2025-11-21 13:58:42.984463821 +0000 UTC m=+1597.105996383" watchObservedRunningTime="2025-11-21 13:58:42.996188987 +0000 UTC m=+1597.117721589" Nov 21 13:58:43 crc kubenswrapper[4904]: I1121 13:58:43.232793 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 13:58:43 crc kubenswrapper[4904]: I1121 13:58:43.271259 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 13:58:43 crc kubenswrapper[4904]: I1121 13:58:43.484065 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 13:58:43 crc kubenswrapper[4904]: I1121 13:58:43.484162 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 13:58:44 crc kubenswrapper[4904]: I1121 13:58:44.038613 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 21 13:58:44 crc kubenswrapper[4904]: I1121 13:58:44.567103 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:58:44 crc kubenswrapper[4904]: I1121 13:58:44.567283 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.756899 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.774573 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.846936 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdrs9\" (UniqueName: \"kubernetes.io/projected/04b80661-562e-4a69-ba70-15c22a4d6ece-kube-api-access-xdrs9\") pod \"04b80661-562e-4a69-ba70-15c22a4d6ece\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.847046 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-config-data\") pod \"6a18bd76-074c-4265-afc7-24155421e2ff\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.847109 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a18bd76-074c-4265-afc7-24155421e2ff-logs\") pod \"6a18bd76-074c-4265-afc7-24155421e2ff\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.847207 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-combined-ca-bundle\") pod \"6a18bd76-074c-4265-afc7-24155421e2ff\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.847242 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-combined-ca-bundle\") pod \"04b80661-562e-4a69-ba70-15c22a4d6ece\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.847329 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-config-data\") pod \"04b80661-562e-4a69-ba70-15c22a4d6ece\" (UID: \"04b80661-562e-4a69-ba70-15c22a4d6ece\") " Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.847383 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7scm8\" (UniqueName: \"kubernetes.io/projected/6a18bd76-074c-4265-afc7-24155421e2ff-kube-api-access-7scm8\") pod \"6a18bd76-074c-4265-afc7-24155421e2ff\" (UID: \"6a18bd76-074c-4265-afc7-24155421e2ff\") " Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.848026 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a18bd76-074c-4265-afc7-24155421e2ff-logs" (OuterVolumeSpecName: "logs") pod "6a18bd76-074c-4265-afc7-24155421e2ff" (UID: "6a18bd76-074c-4265-afc7-24155421e2ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.875294 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a18bd76-074c-4265-afc7-24155421e2ff-kube-api-access-7scm8" (OuterVolumeSpecName: "kube-api-access-7scm8") pod "6a18bd76-074c-4265-afc7-24155421e2ff" (UID: "6a18bd76-074c-4265-afc7-24155421e2ff"). InnerVolumeSpecName "kube-api-access-7scm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.875437 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b80661-562e-4a69-ba70-15c22a4d6ece-kube-api-access-xdrs9" (OuterVolumeSpecName: "kube-api-access-xdrs9") pod "04b80661-562e-4a69-ba70-15c22a4d6ece" (UID: "04b80661-562e-4a69-ba70-15c22a4d6ece"). InnerVolumeSpecName "kube-api-access-xdrs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.886476 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a18bd76-074c-4265-afc7-24155421e2ff" (UID: "6a18bd76-074c-4265-afc7-24155421e2ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.896998 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04b80661-562e-4a69-ba70-15c22a4d6ece" (UID: "04b80661-562e-4a69-ba70-15c22a4d6ece"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.903812 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-config-data" (OuterVolumeSpecName: "config-data") pod "04b80661-562e-4a69-ba70-15c22a4d6ece" (UID: "04b80661-562e-4a69-ba70-15c22a4d6ece"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.929924 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-config-data" (OuterVolumeSpecName: "config-data") pod "6a18bd76-074c-4265-afc7-24155421e2ff" (UID: "6a18bd76-074c-4265-afc7-24155421e2ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.950066 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.950124 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a18bd76-074c-4265-afc7-24155421e2ff-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.950138 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a18bd76-074c-4265-afc7-24155421e2ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.950155 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.950168 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b80661-562e-4a69-ba70-15c22a4d6ece-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.950183 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7scm8\" (UniqueName: \"kubernetes.io/projected/6a18bd76-074c-4265-afc7-24155421e2ff-kube-api-access-7scm8\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:45 crc kubenswrapper[4904]: I1121 13:58:45.950198 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdrs9\" (UniqueName: \"kubernetes.io/projected/04b80661-562e-4a69-ba70-15c22a4d6ece-kube-api-access-xdrs9\") on node \"crc\" DevicePath \"\"" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.007067 4904 generic.go:334] "Generic (PLEG): container finished" podID="6a18bd76-074c-4265-afc7-24155421e2ff" containerID="381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2" exitCode=137 Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.007179 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.007240 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a18bd76-074c-4265-afc7-24155421e2ff","Type":"ContainerDied","Data":"381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2"} Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.007848 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a18bd76-074c-4265-afc7-24155421e2ff","Type":"ContainerDied","Data":"d54a2d02c4d67ab32e049e3e342ec0cba38874977f1101a6d8a68380eecc7b93"} Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.007880 4904 scope.go:117] "RemoveContainer" containerID="381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.013223 4904 generic.go:334] "Generic (PLEG): container finished" podID="04b80661-562e-4a69-ba70-15c22a4d6ece" containerID="d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc" exitCode=137 Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.013277 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04b80661-562e-4a69-ba70-15c22a4d6ece","Type":"ContainerDied","Data":"d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc"} Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.013331 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"04b80661-562e-4a69-ba70-15c22a4d6ece","Type":"ContainerDied","Data":"b452d54fdc2d10553cb50d8e902ebe976cff5025e0fc3d30cb408dcf10b0a870"} Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.013508 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.038785 4904 scope.go:117] "RemoveContainer" containerID="ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.095423 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.097862 4904 scope.go:117] "RemoveContainer" containerID="381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2" Nov 21 13:58:46 crc kubenswrapper[4904]: E1121 13:58:46.099453 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2\": container with ID starting with 381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2 not found: ID does not exist" containerID="381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.099504 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2"} err="failed to get container status \"381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2\": rpc error: code = NotFound desc = could not find container \"381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2\": container with ID starting with 381bf5325530a921d4c75f007de9a4a1dbc2e4b4d6817ebf5e329c22edfc46b2 not found: ID does not exist" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.099545 4904 scope.go:117] "RemoveContainer" containerID="ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8" Nov 21 13:58:46 crc kubenswrapper[4904]: E1121 13:58:46.101039 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8\": container with ID starting with ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8 not found: ID does not exist" containerID="ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.101060 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8"} err="failed to get container status \"ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8\": rpc error: code = NotFound desc = could not find container \"ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8\": container with ID starting with ebe8323bf2f13f03d76a3cd557c684c7180a67e07885f502a43b3064c66e5fa8 not found: ID does not exist" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.101081 4904 scope.go:117] "RemoveContainer" containerID="d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.112907 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.127674 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.138544 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 13:58:46 crc 
kubenswrapper[4904]: I1121 13:58:46.139854 4904 scope.go:117] "RemoveContainer" containerID="d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc" Nov 21 13:58:46 crc kubenswrapper[4904]: E1121 13:58:46.140369 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc\": container with ID starting with d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc not found: ID does not exist" containerID="d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.140453 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc"} err="failed to get container status \"d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc\": rpc error: code = NotFound desc = could not find container \"d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc\": container with ID starting with d1a206fffb443276292905892d17ccdae356897103db590f4713b7e91b17f3bc not found: ID does not exist" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.157555 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:58:46 crc kubenswrapper[4904]: E1121 13:58:46.158346 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b80661-562e-4a69-ba70-15c22a4d6ece" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.158373 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b80661-562e-4a69-ba70-15c22a4d6ece" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 13:58:46 crc kubenswrapper[4904]: E1121 13:58:46.158409 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" containerName="nova-metadata-metadata" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.158420 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" containerName="nova-metadata-metadata" Nov 21 13:58:46 crc kubenswrapper[4904]: E1121 13:58:46.158446 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" containerName="nova-metadata-log" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.158455 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" containerName="nova-metadata-log" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.158877 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" containerName="nova-metadata-log" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.158896 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" containerName="nova-metadata-metadata" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.158911 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b80661-562e-4a69-ba70-15c22a4d6ece" containerName="nova-cell1-novncproxy-novncproxy" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.160413 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.163683 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.164427 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.213409 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.216046 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.220371 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.220716 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.220860 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.236634 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.249930 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.256160 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.256248 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.256429 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480918e8-d31c-4ef0-a875-8386e2709cd2-logs\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.256468 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4rmw\" (UniqueName: \"kubernetes.io/projected/480918e8-d31c-4ef0-a875-8386e2709cd2-kube-api-access-d4rmw\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.256580 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-config-data\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: E1121 
13:58:46.296000 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a18bd76_074c_4265_afc7_24155421e2ff.slice/crio-d54a2d02c4d67ab32e049e3e342ec0cba38874977f1101a6d8a68380eecc7b93\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a18bd76_074c_4265_afc7_24155421e2ff.slice\": RecentStats: unable to find data in memory cache]" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.359438 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.359530 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.359612 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjg6j\" (UniqueName: \"kubernetes.io/projected/72657d59-241e-443a-9a53-f9c794b67958-kube-api-access-gjg6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.359643 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.359696 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480918e8-d31c-4ef0-a875-8386e2709cd2-logs\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.359734 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.360059 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4rmw\" (UniqueName: \"kubernetes.io/projected/480918e8-d31c-4ef0-a875-8386e2709cd2-kube-api-access-d4rmw\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.360284 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-config-data\") pod \"nova-metadata-0\" (UID: 
\"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.360493 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.360714 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.361747 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480918e8-d31c-4ef0-a875-8386e2709cd2-logs\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.367190 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-config-data\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.367728 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.372617 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.383104 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4rmw\" (UniqueName: \"kubernetes.io/projected/480918e8-d31c-4ef0-a875-8386e2709cd2-kube-api-access-d4rmw\") pod \"nova-metadata-0\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.467477 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.467762 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjg6j\" (UniqueName: \"kubernetes.io/projected/72657d59-241e-443a-9a53-f9c794b67958-kube-api-access-gjg6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.471715 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.473957 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.475116 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.475632 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.478166 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.479797 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.486160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/72657d59-241e-443a-9a53-f9c794b67958-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.489479 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjg6j\" (UniqueName: \"kubernetes.io/projected/72657d59-241e-443a-9a53-f9c794b67958-kube-api-access-gjg6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"72657d59-241e-443a-9a53-f9c794b67958\") " pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.502468 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.561795 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b80661-562e-4a69-ba70-15c22a4d6ece" path="/var/lib/kubelet/pods/04b80661-562e-4a69-ba70-15c22a4d6ece/volumes" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.562635 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a18bd76-074c-4265-afc7-24155421e2ff" path="/var/lib/kubelet/pods/6a18bd76-074c-4265-afc7-24155421e2ff/volumes" Nov 21 13:58:46 crc kubenswrapper[4904]: I1121 13:58:46.586815 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:47 crc kubenswrapper[4904]: I1121 13:58:47.052758 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:58:47 crc kubenswrapper[4904]: W1121 13:58:47.054209 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod480918e8_d31c_4ef0_a875_8386e2709cd2.slice/crio-00d81d5624ef1811cf34da4cb39efec2728758e63bc3be0906f22cc43126f52a WatchSource:0}: Error finding container 00d81d5624ef1811cf34da4cb39efec2728758e63bc3be0906f22cc43126f52a: Status 404 returned error can't find the container with id 00d81d5624ef1811cf34da4cb39efec2728758e63bc3be0906f22cc43126f52a Nov 21 13:58:47 crc kubenswrapper[4904]: W1121 13:58:47.205545 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72657d59_241e_443a_9a53_f9c794b67958.slice/crio-a3de3d350eef869f7aabdfa6da5caa5ba5bf7b557d4e3f5fd7e8aa972edd7eb7 WatchSource:0}: Error finding container a3de3d350eef869f7aabdfa6da5caa5ba5bf7b557d4e3f5fd7e8aa972edd7eb7: Status 404 returned error can't find the container with id a3de3d350eef869f7aabdfa6da5caa5ba5bf7b557d4e3f5fd7e8aa972edd7eb7 Nov 21 13:58:47 crc kubenswrapper[4904]: I1121 13:58:47.213983 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 21 13:58:48 crc kubenswrapper[4904]: I1121 13:58:48.051003 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"72657d59-241e-443a-9a53-f9c794b67958","Type":"ContainerStarted","Data":"c5aac42388d73a666103daacd5d027a1377af939fb278ec5c9fbdc6d02319901"} Nov 21 13:58:48 crc kubenswrapper[4904]: I1121 13:58:48.051480 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"72657d59-241e-443a-9a53-f9c794b67958","Type":"ContainerStarted","Data":"a3de3d350eef869f7aabdfa6da5caa5ba5bf7b557d4e3f5fd7e8aa972edd7eb7"} Nov 21 13:58:48 crc kubenswrapper[4904]: I1121 13:58:48.056948 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480918e8-d31c-4ef0-a875-8386e2709cd2","Type":"ContainerStarted","Data":"8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185"} Nov 21 13:58:48 crc kubenswrapper[4904]: I1121 13:58:48.057010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480918e8-d31c-4ef0-a875-8386e2709cd2","Type":"ContainerStarted","Data":"c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3"} Nov 21 13:58:48 crc kubenswrapper[4904]: I1121 13:58:48.057022 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"480918e8-d31c-4ef0-a875-8386e2709cd2","Type":"ContainerStarted","Data":"00d81d5624ef1811cf34da4cb39efec2728758e63bc3be0906f22cc43126f52a"} Nov 21 13:58:48 crc kubenswrapper[4904]: I1121 13:58:48.089414 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.08938489 podStartE2EDuration="2.08938489s" podCreationTimestamp="2025-11-21 13:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:48.07297128 +0000 UTC m=+1602.194503842" watchObservedRunningTime="2025-11-21 13:58:48.08938489 +0000 UTC m=+1602.210917442" Nov 21 13:58:48 crc kubenswrapper[4904]: I1121 13:58:48.103528 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.103489583 podStartE2EDuration="2.103489583s" podCreationTimestamp="2025-11-21 13:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:48.092218849 +0000 UTC m=+1602.213751411" watchObservedRunningTime="2025-11-21 13:58:48.103489583 +0000 UTC m=+1602.225022145" Nov 21 13:58:51 crc kubenswrapper[4904]: I1121 13:58:51.503308 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 13:58:51 crc kubenswrapper[4904]: I1121 13:58:51.503915 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 13:58:51 crc kubenswrapper[4904]: I1121 13:58:51.588855 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:53 crc kubenswrapper[4904]: I1121 13:58:53.491363 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 13:58:53 crc kubenswrapper[4904]: I1121 13:58:53.492220 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 13:58:53 crc kubenswrapper[4904]: I1121 13:58:53.492384 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 13:58:53 crc kubenswrapper[4904]: I1121 13:58:53.495663 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.131375 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.138948 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.360305 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-hchfr"] Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.365205 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.383531 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-hchfr"] Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.512686 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rfh\" (UniqueName: \"kubernetes.io/projected/b45512b9-5eee-49ce-b2f4-f70fee0312d2-kube-api-access-76rfh\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.513166 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.513248 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-config\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.517074 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.517217 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.517414 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.620096 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-config\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.620177 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.620226 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.620276 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.620378 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rfh\" (UniqueName: \"kubernetes.io/projected/b45512b9-5eee-49ce-b2f4-f70fee0312d2-kube-api-access-76rfh\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.620421 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.621373 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-config\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.621382 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.621516 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.621619 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.622234 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.642547 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rfh\" (UniqueName: 
\"kubernetes.io/projected/b45512b9-5eee-49ce-b2f4-f70fee0312d2-kube-api-access-76rfh\") pod \"dnsmasq-dns-6b7bbf7cf9-hchfr\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:54 crc kubenswrapper[4904]: I1121 13:58:54.721601 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:55 crc kubenswrapper[4904]: I1121 13:58:55.322457 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-hchfr"] Nov 21 13:58:55 crc kubenswrapper[4904]: I1121 13:58:55.514299 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:58:55 crc kubenswrapper[4904]: E1121 13:58:55.515068 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.186617 4904 generic.go:334] "Generic (PLEG): container finished" podID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" containerID="49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa" exitCode=0 Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.187689 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" event={"ID":"b45512b9-5eee-49ce-b2f4-f70fee0312d2","Type":"ContainerDied","Data":"49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa"} Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.187740 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" event={"ID":"b45512b9-5eee-49ce-b2f4-f70fee0312d2","Type":"ContainerStarted","Data":"38f61bf7c0cf678f0cf197f3a651ef5b52c90f33c452f2585f14c84bc9c72733"} Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.502941 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.502993 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.588815 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.631257 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.870901 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.931995 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.932390 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="ceilometer-central-agent" containerID="cri-o://68276a88ab1cbf53f2e5e847c38463e72c56580083847856e1de7a512229d184" gracePeriod=30 Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.932586 4904 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="proxy-httpd" containerID="cri-o://9900feedaf9707bb1effe6e009e394c73df19036b0a8919486eca5dcbf7ba8e7" gracePeriod=30 Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.932526 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="sg-core" containerID="cri-o://1cbfe16fd938c7af28f50418f9eaa4af9730706364975df79e4f444662f952e8" gracePeriod=30 Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.932579 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="ceilometer-notification-agent" containerID="cri-o://a224efb8ab37c1224142358e785d2a4a2400d574da5e8dc4400a6b88fc1c2fc6" gracePeriod=30 Nov 21 13:58:56 crc kubenswrapper[4904]: I1121 13:58:56.956132 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.203143 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" event={"ID":"b45512b9-5eee-49ce-b2f4-f70fee0312d2","Type":"ContainerStarted","Data":"14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b"} Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.203732 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.205784 4904 generic.go:334] "Generic (PLEG): container finished" podID="5149390b-82c0-4793-a607-d245ecc83257" containerID="9900feedaf9707bb1effe6e009e394c73df19036b0a8919486eca5dcbf7ba8e7" exitCode=0 Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.205828 4904 generic.go:334] "Generic (PLEG): container finished" podID="5149390b-82c0-4793-a607-d245ecc83257" containerID="1cbfe16fd938c7af28f50418f9eaa4af9730706364975df79e4f444662f952e8" exitCode=2 Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.205854 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerDied","Data":"9900feedaf9707bb1effe6e009e394c73df19036b0a8919486eca5dcbf7ba8e7"} Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.205881 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerDied","Data":"1cbfe16fd938c7af28f50418f9eaa4af9730706364975df79e4f444662f952e8"} Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.206521 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-log" containerID="cri-o://485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157" gracePeriod=30 Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.206691 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-api" containerID="cri-o://af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5" gracePeriod=30 Nov 21 
13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.228244 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.276721 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" podStartSLOduration=3.276690267 podStartE2EDuration="3.276690267s" podCreationTimestamp="2025-11-21 13:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:57.239466659 +0000 UTC m=+1611.360999201" watchObservedRunningTime="2025-11-21 13:58:57.276690267 +0000 UTC m=+1611.398222839" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.444182 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-p8zmt"] Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.446118 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.449440 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.449634 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.468881 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p8zmt"] Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.512713 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.520891 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.602634 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-scripts\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.602764 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.602796 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsrbc\" (UniqueName: \"kubernetes.io/projected/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-kube-api-access-tsrbc\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " 
pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.602908 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-config-data\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.710466 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.710528 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsrbc\" (UniqueName: \"kubernetes.io/projected/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-kube-api-access-tsrbc\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.710716 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-config-data\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.710942 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-scripts\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.719704 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-scripts\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.719884 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-config-data\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.735421 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsrbc\" (UniqueName: \"kubernetes.io/projected/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-kube-api-access-tsrbc\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc kubenswrapper[4904]: I1121 13:58:57.751951 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-p8zmt\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:57 crc 
kubenswrapper[4904]: I1121 13:58:57.802808 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:58:58 crc kubenswrapper[4904]: I1121 13:58:58.220268 4904 generic.go:334] "Generic (PLEG): container finished" podID="5149390b-82c0-4793-a607-d245ecc83257" containerID="68276a88ab1cbf53f2e5e847c38463e72c56580083847856e1de7a512229d184" exitCode=0 Nov 21 13:58:58 crc kubenswrapper[4904]: I1121 13:58:58.220329 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerDied","Data":"68276a88ab1cbf53f2e5e847c38463e72c56580083847856e1de7a512229d184"} Nov 21 13:58:58 crc kubenswrapper[4904]: I1121 13:58:58.223519 4904 generic.go:334] "Generic (PLEG): container finished" podID="313456af-18a0-4248-adc2-3480a25d2e3f" containerID="485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157" exitCode=143 Nov 21 13:58:58 crc kubenswrapper[4904]: I1121 13:58:58.223601 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313456af-18a0-4248-adc2-3480a25d2e3f","Type":"ContainerDied","Data":"485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157"} Nov 21 13:58:58 crc kubenswrapper[4904]: I1121 13:58:58.370618 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-p8zmt"] Nov 21 13:58:58 crc kubenswrapper[4904]: W1121 13:58:58.380952 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38e2ed93_b0dc_4698_a3e7_415f090e9ab2.slice/crio-0655f63261d3d9b92a0cdc142b549a61d559956d842e16d5d2c4e955a3f7971a WatchSource:0}: Error finding container 0655f63261d3d9b92a0cdc142b549a61d559956d842e16d5d2c4e955a3f7971a: Status 404 returned error can't find the container with id 0655f63261d3d9b92a0cdc142b549a61d559956d842e16d5d2c4e955a3f7971a Nov 21 13:58:59 crc kubenswrapper[4904]: I1121 13:58:59.241403 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p8zmt" event={"ID":"38e2ed93-b0dc-4698-a3e7-415f090e9ab2","Type":"ContainerStarted","Data":"ab7380cc299ba5249a2f3e93358da7fc8cf113dab4b1425381172f7f802ff5e8"} Nov 21 13:58:59 crc kubenswrapper[4904]: I1121 13:58:59.241763 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p8zmt" event={"ID":"38e2ed93-b0dc-4698-a3e7-415f090e9ab2","Type":"ContainerStarted","Data":"0655f63261d3d9b92a0cdc142b549a61d559956d842e16d5d2c4e955a3f7971a"} Nov 21 13:58:59 crc kubenswrapper[4904]: I1121 13:58:59.261507 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-p8zmt" podStartSLOduration=2.261483761 podStartE2EDuration="2.261483761s" podCreationTimestamp="2025-11-21 13:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:58:59.261009549 +0000 UTC m=+1613.382542111" watchObservedRunningTime="2025-11-21 13:58:59.261483761 +0000 UTC m=+1613.383016323" Nov 21 13:59:00 crc kubenswrapper[4904]: I1121 13:59:00.913021 4904 util.go:48] "No ready sandbox for pod can be found. 
Nov 21 13:59:00 crc kubenswrapper[4904]: I1121 13:59:00.993389 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-combined-ca-bundle\") pod \"313456af-18a0-4248-adc2-3480a25d2e3f\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") "
Nov 21 13:59:00 crc kubenswrapper[4904]: I1121 13:59:00.993541 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-config-data\") pod \"313456af-18a0-4248-adc2-3480a25d2e3f\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") "
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.041765 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "313456af-18a0-4248-adc2-3480a25d2e3f" (UID: "313456af-18a0-4248-adc2-3480a25d2e3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.048896 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-config-data" (OuterVolumeSpecName: "config-data") pod "313456af-18a0-4248-adc2-3480a25d2e3f" (UID: "313456af-18a0-4248-adc2-3480a25d2e3f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.096389 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313456af-18a0-4248-adc2-3480a25d2e3f-logs\") pod \"313456af-18a0-4248-adc2-3480a25d2e3f\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") "
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.096726 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xn6t\" (UniqueName: \"kubernetes.io/projected/313456af-18a0-4248-adc2-3480a25d2e3f-kube-api-access-4xn6t\") pod \"313456af-18a0-4248-adc2-3480a25d2e3f\" (UID: \"313456af-18a0-4248-adc2-3480a25d2e3f\") "
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.097488 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.097572 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/313456af-18a0-4248-adc2-3480a25d2e3f-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.098726 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/313456af-18a0-4248-adc2-3480a25d2e3f-logs" (OuterVolumeSpecName: "logs") pod "313456af-18a0-4248-adc2-3480a25d2e3f" (UID: "313456af-18a0-4248-adc2-3480a25d2e3f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.107479 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313456af-18a0-4248-adc2-3480a25d2e3f-kube-api-access-4xn6t" (OuterVolumeSpecName: "kube-api-access-4xn6t") pod "313456af-18a0-4248-adc2-3480a25d2e3f" (UID: "313456af-18a0-4248-adc2-3480a25d2e3f"). InnerVolumeSpecName "kube-api-access-4xn6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.182412 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.200326 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/313456af-18a0-4248-adc2-3480a25d2e3f-logs\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.200363 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xn6t\" (UniqueName: \"kubernetes.io/projected/313456af-18a0-4248-adc2-3480a25d2e3f-kube-api-access-4xn6t\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.269621 4904 generic.go:334] "Generic (PLEG): container finished" podID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerID="471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f" exitCode=137
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.269971 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerDied","Data":"471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f"}
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.270011 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da","Type":"ContainerDied","Data":"94a528bac75a88f87e4c56ce0c443267a27090349d89f229b4b16b42c75e7c0f"}
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.270032 4904 scope.go:117] "RemoveContainer" containerID="471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.270182 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.274217 4904 generic.go:334] "Generic (PLEG): container finished" podID="313456af-18a0-4248-adc2-3480a25d2e3f" containerID="af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5" exitCode=0
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.274245 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313456af-18a0-4248-adc2-3480a25d2e3f","Type":"ContainerDied","Data":"af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5"}
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.274263 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"313456af-18a0-4248-adc2-3480a25d2e3f","Type":"ContainerDied","Data":"4005a08bfda36f99feaccf27ef77ddeea35f522bf44439d2b475b2cfeca6465b"}
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.274308 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.301398 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmjrh\" (UniqueName: \"kubernetes.io/projected/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-kube-api-access-dmjrh\") pod \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") "
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.301474 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-scripts\") pod \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") "
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.301726 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-config-data\") pod \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") "
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.302525 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-combined-ca-bundle\") pod \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\" (UID: \"9d2a8d2c-7abe-41bf-ab48-48bfd5a703da\") "
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.310153 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-scripts" (OuterVolumeSpecName: "scripts") pod "9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" (UID: "9d2a8d2c-7abe-41bf-ab48-48bfd5a703da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.314277 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-kube-api-access-dmjrh" (OuterVolumeSpecName: "kube-api-access-dmjrh") pod "9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" (UID: "9d2a8d2c-7abe-41bf-ab48-48bfd5a703da"). InnerVolumeSpecName "kube-api-access-dmjrh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.327100 4904 scope.go:117] "RemoveContainer" containerID="af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.355133 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.356948 4904 scope.go:117] "RemoveContainer" containerID="1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.382297 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.392275 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.393193 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-api"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.393286 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-api"
Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.393558 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-api"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.394103 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-api"
Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.394203 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-evaluator"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.394272 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-evaluator"
Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.394350 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-notifier"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.394405 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-notifier"
Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.394573 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-log"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.394633 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-log"
Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.394720 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-listener"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.394780 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-listener"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.395160 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-api"
Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.395253 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-notifier"
podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-notifier" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.395326 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-evaluator" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.395394 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-api" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.395464 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" containerName="nova-api-log" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.395532 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" containerName="aodh-listener" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.403614 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.403789 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.405419 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.405457 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmjrh\" (UniqueName: \"kubernetes.io/projected/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-kube-api-access-dmjrh\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.411841 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.413237 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.425201 4904 scope.go:117] "RemoveContainer" containerID="91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.453528 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.509293 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-config-data\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.509382 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-public-tls-certs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.509505 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hqh\" (UniqueName: \"kubernetes.io/projected/1ee512ba-128b-4864-ac07-391b6e73ecc4-kube-api-access-j6hqh\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc 
kubenswrapper[4904]: I1121 13:59:01.509543 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.509773 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee512ba-128b-4864-ac07-391b6e73ecc4-logs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.509795 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.517834 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-config-data" (OuterVolumeSpecName: "config-data") pod "9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" (UID: "9d2a8d2c-7abe-41bf-ab48-48bfd5a703da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.547703 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" (UID: "9d2a8d2c-7abe-41bf-ab48-48bfd5a703da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.611813 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6hqh\" (UniqueName: \"kubernetes.io/projected/1ee512ba-128b-4864-ac07-391b6e73ecc4-kube-api-access-j6hqh\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.611904 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.611997 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee512ba-128b-4864-ac07-391b6e73ecc4-logs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.612025 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.612150 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-config-data\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.612213 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-public-tls-certs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.612285 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.612295 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.614909 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee512ba-128b-4864-ac07-391b6e73ecc4-logs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.618624 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-config-data\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.619607 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.619811 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-public-tls-certs\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.621075 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.635383 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6hqh\" (UniqueName: \"kubernetes.io/projected/1ee512ba-128b-4864-ac07-391b6e73ecc4-kube-api-access-j6hqh\") pod \"nova-api-0\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.713546 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.724007 4904 scope.go:117] "RemoveContainer" containerID="471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f" Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.724973 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f\": container with ID starting with 471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f not found: ID does not exist" containerID="471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.725056 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f"} err="failed to get container status \"471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f\": rpc error: code = NotFound desc = could not find container \"471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f\": container with ID starting with 471c8929959b2103c1e5b5fce7ba1a411364bca31f593b55c2fb30b51251c40f not found: ID does not exist" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.725113 4904 scope.go:117] "RemoveContainer" containerID="af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9" Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.726073 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9\": container with ID starting with af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9 not found: ID does not exist" containerID="af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.726101 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9"} err="failed to get container status 
\"af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9\": rpc error: code = NotFound desc = could not find container \"af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9\": container with ID starting with af1e3d4f55b5f52ae122241f8dc67fde657a47ba9e1f273f5fafbbc09e4ff8b9 not found: ID does not exist" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.726141 4904 scope.go:117] "RemoveContainer" containerID="1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5" Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.727877 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5\": container with ID starting with 1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5 not found: ID does not exist" containerID="1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.727914 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5"} err="failed to get container status \"1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5\": rpc error: code = NotFound desc = could not find container \"1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5\": container with ID starting with 1a13448257076be1e67a96540699acc76cd50f1132c33356b1b9a2174e81c4d5 not found: ID does not exist" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.727967 4904 scope.go:117] "RemoveContainer" containerID="91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9" Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.729616 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9\": container with ID starting with 91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9 not found: ID does not exist" containerID="91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.729723 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9"} err="failed to get container status \"91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9\": rpc error: code = NotFound desc = could not find container \"91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9\": container with ID starting with 91684220d17c1acb88b3bc63400c7268920e1077afa46af513376f0e866ecbc9 not found: ID does not exist" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.729760 4904 scope.go:117] "RemoveContainer" containerID="af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.734167 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.768494 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.778931 4904 scope.go:117] "RemoveContainer" containerID="485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.782309 4904 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/aodh-0"] Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.785485 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.789467 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.789845 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.790027 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.790497 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lqfk6" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.792556 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.793056 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.833961 4904 scope.go:117] "RemoveContainer" containerID="af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5" Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.835721 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5\": container with ID starting with af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5 not found: ID does not exist" containerID="af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.835773 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5"} err="failed to get container status \"af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5\": rpc error: code = NotFound desc = could not find container \"af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5\": container with ID starting with af105b57c7c26825c4cd11ea60036a3eda77c0f64b312236bbbf2a40b16e1ff5 not found: ID does not exist" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.835799 4904 scope.go:117] "RemoveContainer" containerID="485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157" Nov 21 13:59:01 crc kubenswrapper[4904]: E1121 13:59:01.836370 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157\": container with ID starting with 485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157 not found: ID does not exist" containerID="485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.836425 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157"} err="failed to get container status \"485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157\": rpc error: code = NotFound desc = could not find container \"485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157\": container with ID starting with 
485b98068476cd19e2e7e507d65cdeb3eb38f8dad7c84a52f559a17887ba8157 not found: ID does not exist" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.921404 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7dpz\" (UniqueName: \"kubernetes.io/projected/06220aa0-c8a9-49c0-a1f6-f71022c409d6-kube-api-access-j7dpz\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.921492 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-config-data\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.921524 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-public-tls-certs\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.921541 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-internal-tls-certs\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.921567 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:01 crc kubenswrapper[4904]: I1121 13:59:01.921599 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-scripts\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.024582 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7dpz\" (UniqueName: \"kubernetes.io/projected/06220aa0-c8a9-49c0-a1f6-f71022c409d6-kube-api-access-j7dpz\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.024673 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-config-data\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.024719 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-public-tls-certs\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.024740 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-internal-tls-certs\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.024778 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.024808 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-scripts\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.031864 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-scripts\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.034400 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-internal-tls-certs\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.034521 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-public-tls-certs\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.039796 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-config-data\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.041778 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-combined-ca-bundle\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.054433 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7dpz\" (UniqueName: \"kubernetes.io/projected/06220aa0-c8a9-49c0-a1f6-f71022c409d6-kube-api-access-j7dpz\") pod \"aodh-0\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " pod="openstack/aodh-0" Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.123341 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.292797 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.315746 4904 generic.go:334] "Generic (PLEG): container finished" podID="5149390b-82c0-4793-a607-d245ecc83257" containerID="a224efb8ab37c1224142358e785d2a4a2400d574da5e8dc4400a6b88fc1c2fc6" exitCode=0
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.315791 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerDied","Data":"a224efb8ab37c1224142358e785d2a4a2400d574da5e8dc4400a6b88fc1c2fc6"}
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.530298 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313456af-18a0-4248-adc2-3480a25d2e3f" path="/var/lib/kubelet/pods/313456af-18a0-4248-adc2-3480a25d2e3f/volumes"
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.531639 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2a8d2c-7abe-41bf-ab48-48bfd5a703da" path="/var/lib/kubelet/pods/9d2a8d2c-7abe-41bf-ab48-48bfd5a703da/volumes"
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.700836 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.879142 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.953929 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-log-httpd\") pod \"5149390b-82c0-4793-a607-d245ecc83257\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") "
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.954108 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-run-httpd\") pod \"5149390b-82c0-4793-a607-d245ecc83257\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") "
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.954143 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-combined-ca-bundle\") pod \"5149390b-82c0-4793-a607-d245ecc83257\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") "
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.954288 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsqxr\" (UniqueName: \"kubernetes.io/projected/5149390b-82c0-4793-a607-d245ecc83257-kube-api-access-xsqxr\") pod \"5149390b-82c0-4793-a607-d245ecc83257\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") "
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.954777 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5149390b-82c0-4793-a607-d245ecc83257" (UID: "5149390b-82c0-4793-a607-d245ecc83257"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.954864 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5149390b-82c0-4793-a607-d245ecc83257" (UID: "5149390b-82c0-4793-a607-d245ecc83257"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.956384 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-config-data\") pod \"5149390b-82c0-4793-a607-d245ecc83257\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") "
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.956416 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-scripts\") pod \"5149390b-82c0-4793-a607-d245ecc83257\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") "
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.956463 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-sg-core-conf-yaml\") pod \"5149390b-82c0-4793-a607-d245ecc83257\" (UID: \"5149390b-82c0-4793-a607-d245ecc83257\") "
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.957717 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.957750 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5149390b-82c0-4793-a607-d245ecc83257-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.963859 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-scripts" (OuterVolumeSpecName: "scripts") pod "5149390b-82c0-4793-a607-d245ecc83257" (UID: "5149390b-82c0-4793-a607-d245ecc83257"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.964158 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5149390b-82c0-4793-a607-d245ecc83257-kube-api-access-xsqxr" (OuterVolumeSpecName: "kube-api-access-xsqxr") pod "5149390b-82c0-4793-a607-d245ecc83257" (UID: "5149390b-82c0-4793-a607-d245ecc83257"). InnerVolumeSpecName "kube-api-access-xsqxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 13:59:02 crc kubenswrapper[4904]: I1121 13:59:02.995025 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5149390b-82c0-4793-a607-d245ecc83257" (UID: "5149390b-82c0-4793-a607-d245ecc83257"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.059827 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsqxr\" (UniqueName: \"kubernetes.io/projected/5149390b-82c0-4793-a607-d245ecc83257-kube-api-access-xsqxr\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.059885 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.059974 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.086383 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5149390b-82c0-4793-a607-d245ecc83257" (UID: "5149390b-82c0-4793-a607-d245ecc83257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.101786 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-config-data" (OuterVolumeSpecName: "config-data") pod "5149390b-82c0-4793-a607-d245ecc83257" (UID: "5149390b-82c0-4793-a607-d245ecc83257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.162446 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.162480 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5149390b-82c0-4793-a607-d245ecc83257-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.331291 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerStarted","Data":"b4ade0be00be751b675cdd28f3be014cc4f9985b1acc5286d05f4cb1676b638d"}
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.343737 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ee512ba-128b-4864-ac07-391b6e73ecc4","Type":"ContainerStarted","Data":"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83"}
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.343804 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ee512ba-128b-4864-ac07-391b6e73ecc4","Type":"ContainerStarted","Data":"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926"}
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.343819 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ee512ba-128b-4864-ac07-391b6e73ecc4","Type":"ContainerStarted","Data":"edc54dbac90988ba1389c8bffbbeabf5e6cd07a5b3bc8a8d34a92d5909a97a5e"}
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.349417 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5149390b-82c0-4793-a607-d245ecc83257","Type":"ContainerDied","Data":"d70ec211dd3732aa404ed8a4a55661647e0b26d4a686736133643138e379cc94"}
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.349479 4904 scope.go:117] "RemoveContainer" containerID="9900feedaf9707bb1effe6e009e394c73df19036b0a8919486eca5dcbf7ba8e7"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.349624 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.384649 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.3846187309999998 podStartE2EDuration="2.384618731s" podCreationTimestamp="2025-11-21 13:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:03.373046489 +0000 UTC m=+1617.494579051" watchObservedRunningTime="2025-11-21 13:59:03.384618731 +0000 UTC m=+1617.506151283"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.413732 4904 scope.go:117] "RemoveContainer" containerID="1cbfe16fd938c7af28f50418f9eaa4af9730706364975df79e4f444662f952e8"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.416384 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.429920 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.441198 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 21 13:59:03 crc kubenswrapper[4904]: E1121 13:59:03.441741 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="sg-core"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.441759 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="sg-core"
Nov 21 13:59:03 crc kubenswrapper[4904]: E1121 13:59:03.441777 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="proxy-httpd"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.441785 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="proxy-httpd"
Nov 21 13:59:03 crc kubenswrapper[4904]: E1121 13:59:03.441820 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="ceilometer-notification-agent"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.441829 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="ceilometer-notification-agent"
Nov 21 13:59:03 crc kubenswrapper[4904]: E1121 13:59:03.441850 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="ceilometer-central-agent"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.441861 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="ceilometer-central-agent"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.442111 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="ceilometer-central-agent"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.442134 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="sg-core"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.442152 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="ceilometer-notification-agent"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.442166 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5149390b-82c0-4793-a607-d245ecc83257" containerName="proxy-httpd"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.444237 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.449035 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.449189 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.463758 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.551307 4904 scope.go:117] "RemoveContainer" containerID="a224efb8ab37c1224142358e785d2a4a2400d574da5e8dc4400a6b88fc1c2fc6"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.573197 4904 scope.go:117] "RemoveContainer" containerID="68276a88ab1cbf53f2e5e847c38463e72c56580083847856e1de7a512229d184"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.602002 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-scripts\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.602266 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.602539 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.602827 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjb2z\" (UniqueName: \"kubernetes.io/projected/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-kube-api-access-pjb2z\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.602975 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-config-data\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.603106 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-log-httpd\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.603178 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-run-httpd\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.704722 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.704817 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.704881 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjb2z\" (UniqueName: \"kubernetes.io/projected/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-kube-api-access-pjb2z\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.704928 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-config-data\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.704961 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-log-httpd\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.704989 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-run-httpd\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.705048 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-scripts\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.705416 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-log-httpd\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.706137 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-run-httpd\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.710019 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-scripts\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.710884 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.725056 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.730134 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-config-data\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.730418 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjb2z\" (UniqueName: \"kubernetes.io/projected/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-kube-api-access-pjb2z\") pod \"ceilometer-0\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " pod="openstack/ceilometer-0"
Nov 21 13:59:03 crc kubenswrapper[4904]: I1121 13:59:03.840913 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 13:59:04 crc kubenswrapper[4904]: I1121 13:59:04.362502 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 13:59:04 crc kubenswrapper[4904]: I1121 13:59:04.370851 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerStarted","Data":"cf9cd5c6ae7ff150decc33c4aeb3a8dd86456f8f4e30272e0f281174544a55da"}
Nov 21 13:59:04 crc kubenswrapper[4904]: I1121 13:59:04.370900 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerStarted","Data":"a742b48696832c2016485f0b184f09de95c0419c776493851cb43530e0c7ce90"}
Nov 21 13:59:04 crc kubenswrapper[4904]: I1121 13:59:04.526948 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5149390b-82c0-4793-a607-d245ecc83257" path="/var/lib/kubelet/pods/5149390b-82c0-4793-a607-d245ecc83257/volumes"
Nov 21 13:59:04 crc kubenswrapper[4904]: I1121 13:59:04.723846 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr"
Nov 21 13:59:04 crc kubenswrapper[4904]: I1121 13:59:04.852914 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8rrf"]
Nov 21 13:59:04 crc kubenswrapper[4904]: I1121 13:59:04.854590 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" podUID="b6b4562e-b84b-4bff-9680-51341eb3864c" containerName="dnsmasq-dns" containerID="cri-o://ced64c0214f8a24937656987fb3ae64059618624c3497fd118a15dfe39249512" gracePeriod=10
Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.414629 4904 generic.go:334] "Generic (PLEG): container finished" podID="b6b4562e-b84b-4bff-9680-51341eb3864c" containerID="ced64c0214f8a24937656987fb3ae64059618624c3497fd118a15dfe39249512" exitCode=0
Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.414743 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" event={"ID":"b6b4562e-b84b-4bff-9680-51341eb3864c","Type":"ContainerDied","Data":"ced64c0214f8a24937656987fb3ae64059618624c3497fd118a15dfe39249512"}
Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.417316 4904 generic.go:334] "Generic (PLEG): container finished" podID="38e2ed93-b0dc-4698-a3e7-415f090e9ab2" containerID="ab7380cc299ba5249a2f3e93358da7fc8cf113dab4b1425381172f7f802ff5e8" exitCode=0
Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.417386 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p8zmt" event={"ID":"38e2ed93-b0dc-4698-a3e7-415f090e9ab2","Type":"ContainerDied","Data":"ab7380cc299ba5249a2f3e93358da7fc8cf113dab4b1425381172f7f802ff5e8"}
Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.419940 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerStarted","Data":"52996f556100a56c896c4c229fbaf6a174b99256d9acbec5048925e4aaca5877"}
Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.421022 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerStarted","Data":"b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7"}
Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.421045 4904 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerStarted","Data":"dd131677f450e6b6033b4173dc9ad176832bb8f4b878a5047ff3d8901d4a57b9"} Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.540839 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.651569 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-sb\") pod \"b6b4562e-b84b-4bff-9680-51341eb3864c\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.651694 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-nb\") pod \"b6b4562e-b84b-4bff-9680-51341eb3864c\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.651738 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-swift-storage-0\") pod \"b6b4562e-b84b-4bff-9680-51341eb3864c\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.651816 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmchw\" (UniqueName: \"kubernetes.io/projected/b6b4562e-b84b-4bff-9680-51341eb3864c-kube-api-access-pmchw\") pod \"b6b4562e-b84b-4bff-9680-51341eb3864c\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.651932 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-config\") pod \"b6b4562e-b84b-4bff-9680-51341eb3864c\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.652094 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-svc\") pod \"b6b4562e-b84b-4bff-9680-51341eb3864c\" (UID: \"b6b4562e-b84b-4bff-9680-51341eb3864c\") " Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.676865 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b4562e-b84b-4bff-9680-51341eb3864c-kube-api-access-pmchw" (OuterVolumeSpecName: "kube-api-access-pmchw") pod "b6b4562e-b84b-4bff-9680-51341eb3864c" (UID: "b6b4562e-b84b-4bff-9680-51341eb3864c"). InnerVolumeSpecName "kube-api-access-pmchw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.747064 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-config" (OuterVolumeSpecName: "config") pod "b6b4562e-b84b-4bff-9680-51341eb3864c" (UID: "b6b4562e-b84b-4bff-9680-51341eb3864c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.749551 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b6b4562e-b84b-4bff-9680-51341eb3864c" (UID: "b6b4562e-b84b-4bff-9680-51341eb3864c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.755073 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.755235 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmchw\" (UniqueName: \"kubernetes.io/projected/b6b4562e-b84b-4bff-9680-51341eb3864c-kube-api-access-pmchw\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.755293 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-config\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.759309 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b6b4562e-b84b-4bff-9680-51341eb3864c" (UID: "b6b4562e-b84b-4bff-9680-51341eb3864c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.763196 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b6b4562e-b84b-4bff-9680-51341eb3864c" (UID: "b6b4562e-b84b-4bff-9680-51341eb3864c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.804450 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b6b4562e-b84b-4bff-9680-51341eb3864c" (UID: "b6b4562e-b84b-4bff-9680-51341eb3864c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.857761 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.857807 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:05 crc kubenswrapper[4904]: I1121 13:59:05.857820 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b6b4562e-b84b-4bff-9680-51341eb3864c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.450420 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerStarted","Data":"627e45cde73512938001be6413c7c771b94c9a51e925e0fe24910810af788fee"} Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.463945 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerStarted","Data":"a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa"} Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.529446 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.491857628 podStartE2EDuration="5.529413657s" podCreationTimestamp="2025-11-21 13:59:01 +0000 UTC" firstStartedPulling="2025-11-21 13:59:02.6682731 +0000 UTC m=+1616.789805652" lastFinishedPulling="2025-11-21 13:59:05.705829129 +0000 UTC m=+1619.827361681" observedRunningTime="2025-11-21 13:59:06.496291956 +0000 UTC m=+1620.617824508" watchObservedRunningTime="2025-11-21 13:59:06.529413657 +0000 UTC m=+1620.650946209" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.531534 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.531968 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:59:06 crc kubenswrapper[4904]: E1121 13:59:06.532385 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.547782 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.547814 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" event={"ID":"b6b4562e-b84b-4bff-9680-51341eb3864c","Type":"ContainerDied","Data":"d111cc8a28852774899457d3e40fdb3814be479da0a61fc9372567a17b2deecd"} Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.547858 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.549194 4904 scope.go:117] "RemoveContainer" containerID="ced64c0214f8a24937656987fb3ae64059618624c3497fd118a15dfe39249512" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.572501 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.603874 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 13:59:06 crc kubenswrapper[4904]: I1121 13:59:06.642897 4904 scope.go:117] "RemoveContainer" containerID="9920714575c622c425b06946703efcd1674c685fbed6ae64cc874a21d48ae469" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.182483 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.316953 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-config-data\") pod \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.317013 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-scripts\") pod \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.317055 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsrbc\" (UniqueName: \"kubernetes.io/projected/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-kube-api-access-tsrbc\") pod \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.317342 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-combined-ca-bundle\") pod \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\" (UID: \"38e2ed93-b0dc-4698-a3e7-415f090e9ab2\") " Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.325068 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-scripts" (OuterVolumeSpecName: "scripts") pod "38e2ed93-b0dc-4698-a3e7-415f090e9ab2" (UID: "38e2ed93-b0dc-4698-a3e7-415f090e9ab2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.326967 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-kube-api-access-tsrbc" (OuterVolumeSpecName: "kube-api-access-tsrbc") pod "38e2ed93-b0dc-4698-a3e7-415f090e9ab2" (UID: "38e2ed93-b0dc-4698-a3e7-415f090e9ab2"). InnerVolumeSpecName "kube-api-access-tsrbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.355876 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38e2ed93-b0dc-4698-a3e7-415f090e9ab2" (UID: "38e2ed93-b0dc-4698-a3e7-415f090e9ab2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.367810 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-config-data" (OuterVolumeSpecName: "config-data") pod "38e2ed93-b0dc-4698-a3e7-415f090e9ab2" (UID: "38e2ed93-b0dc-4698-a3e7-415f090e9ab2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.420796 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.420839 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.420853 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsrbc\" (UniqueName: \"kubernetes.io/projected/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-kube-api-access-tsrbc\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.420865 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e2ed93-b0dc-4698-a3e7-415f090e9ab2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.529090 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-p8zmt" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.529088 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-p8zmt" event={"ID":"38e2ed93-b0dc-4698-a3e7-415f090e9ab2","Type":"ContainerDied","Data":"0655f63261d3d9b92a0cdc142b549a61d559956d842e16d5d2c4e955a3f7971a"} Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.529252 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0655f63261d3d9b92a0cdc142b549a61d559956d842e16d5d2c4e955a3f7971a" Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.532631 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerStarted","Data":"4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c"} Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.722887 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.723708 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerName="nova-api-log" containerID="cri-o://aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926" gracePeriod=30 Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.724283 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerName="nova-api-api" containerID="cri-o://092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83" gracePeriod=30 Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.744424 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.744790 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" containerName="nova-scheduler-scheduler" containerID="cri-o://6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" gracePeriod=30 Nov 21 13:59:07 crc kubenswrapper[4904]: I1121 13:59:07.759817 4904 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.237262 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.239392 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.241493 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.241557 4904 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" containerName="nova-scheduler-scheduler" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.446954 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.547477 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee512ba-128b-4864-ac07-391b6e73ecc4-logs\") pod \"1ee512ba-128b-4864-ac07-391b6e73ecc4\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.547760 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-config-data\") pod \"1ee512ba-128b-4864-ac07-391b6e73ecc4\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.547800 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-internal-tls-certs\") pod \"1ee512ba-128b-4864-ac07-391b6e73ecc4\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.547892 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-public-tls-certs\") pod \"1ee512ba-128b-4864-ac07-391b6e73ecc4\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.547935 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-combined-ca-bundle\") pod \"1ee512ba-128b-4864-ac07-391b6e73ecc4\" (UID: 
\"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.547938 4904 generic.go:334] "Generic (PLEG): container finished" podID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerID="092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83" exitCode=0 Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.547972 4904 generic.go:334] "Generic (PLEG): container finished" podID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerID="aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926" exitCode=143 Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.547979 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6hqh\" (UniqueName: \"kubernetes.io/projected/1ee512ba-128b-4864-ac07-391b6e73ecc4-kube-api-access-j6hqh\") pod \"1ee512ba-128b-4864-ac07-391b6e73ecc4\" (UID: \"1ee512ba-128b-4864-ac07-391b6e73ecc4\") " Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.548275 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.548409 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee512ba-128b-4864-ac07-391b6e73ecc4-logs" (OuterVolumeSpecName: "logs") pod "1ee512ba-128b-4864-ac07-391b6e73ecc4" (UID: "1ee512ba-128b-4864-ac07-391b6e73ecc4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.548840 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ee512ba-128b-4864-ac07-391b6e73ecc4","Type":"ContainerDied","Data":"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83"} Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.548875 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ee512ba-128b-4864-ac07-391b6e73ecc4","Type":"ContainerDied","Data":"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926"} Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.548887 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1ee512ba-128b-4864-ac07-391b6e73ecc4","Type":"ContainerDied","Data":"edc54dbac90988ba1389c8bffbbeabf5e6cd07a5b3bc8a8d34a92d5909a97a5e"} Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.548904 4904 scope.go:117] "RemoveContainer" containerID="092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.549501 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee512ba-128b-4864-ac07-391b6e73ecc4-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.575283 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee512ba-128b-4864-ac07-391b6e73ecc4-kube-api-access-j6hqh" (OuterVolumeSpecName: "kube-api-access-j6hqh") pod "1ee512ba-128b-4864-ac07-391b6e73ecc4" (UID: "1ee512ba-128b-4864-ac07-391b6e73ecc4"). InnerVolumeSpecName "kube-api-access-j6hqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.593475 4904 scope.go:117] "RemoveContainer" containerID="aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.599575 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-config-data" (OuterVolumeSpecName: "config-data") pod "1ee512ba-128b-4864-ac07-391b6e73ecc4" (UID: "1ee512ba-128b-4864-ac07-391b6e73ecc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.600192 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ee512ba-128b-4864-ac07-391b6e73ecc4" (UID: "1ee512ba-128b-4864-ac07-391b6e73ecc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.627422 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1ee512ba-128b-4864-ac07-391b6e73ecc4" (UID: "1ee512ba-128b-4864-ac07-391b6e73ecc4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.637465 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1ee512ba-128b-4864-ac07-391b6e73ecc4" (UID: "1ee512ba-128b-4864-ac07-391b6e73ecc4"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.652832 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.652878 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.652894 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6hqh\" (UniqueName: \"kubernetes.io/projected/1ee512ba-128b-4864-ac07-391b6e73ecc4-kube-api-access-j6hqh\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.652911 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.652922 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ee512ba-128b-4864-ac07-391b6e73ecc4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.816093 4904 scope.go:117] "RemoveContainer" containerID="092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83" Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.817123 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83\": container with ID starting with 092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83 not found: ID does not exist" containerID="092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.817166 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83"} err="failed to get container status \"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83\": rpc error: code = NotFound desc = could not find container \"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83\": container with ID starting with 092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83 not found: ID does not exist" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.817198 4904 scope.go:117] "RemoveContainer" containerID="aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926" Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.817748 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926\": container with ID starting with aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926 not found: ID does not exist" containerID="aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.817854 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926"} err="failed to get container 
status \"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926\": rpc error: code = NotFound desc = could not find container \"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926\": container with ID starting with aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926 not found: ID does not exist" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.817966 4904 scope.go:117] "RemoveContainer" containerID="092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.818428 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83"} err="failed to get container status \"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83\": rpc error: code = NotFound desc = could not find container \"092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83\": container with ID starting with 092a026f07218621354b7c255634ccf9d3a384b0aa53d2d83b7a1b3cadc01b83 not found: ID does not exist" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.818464 4904 scope.go:117] "RemoveContainer" containerID="aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.819115 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926"} err="failed to get container status \"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926\": rpc error: code = NotFound desc = could not find container \"aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926\": container with ID starting with aba781ecdff19d756fb87db84cd9ae1cc25cd60e6408b654799b0d9334508926 not found: ID does not exist" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.895723 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.905752 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.926883 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.928447 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e2ed93-b0dc-4698-a3e7-415f090e9ab2" containerName="nova-manage" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.928560 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e2ed93-b0dc-4698-a3e7-415f090e9ab2" containerName="nova-manage" Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.928704 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerName="nova-api-api" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.928849 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerName="nova-api-api" Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.928952 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b4562e-b84b-4bff-9680-51341eb3864c" containerName="init" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.929012 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b4562e-b84b-4bff-9680-51341eb3864c" containerName="init" Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.929077 4904 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerName="nova-api-log" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.929126 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerName="nova-api-log" Nov 21 13:59:08 crc kubenswrapper[4904]: E1121 13:59:08.929188 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b4562e-b84b-4bff-9680-51341eb3864c" containerName="dnsmasq-dns" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.929268 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b4562e-b84b-4bff-9680-51341eb3864c" containerName="dnsmasq-dns" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.929539 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b4562e-b84b-4bff-9680-51341eb3864c" containerName="dnsmasq-dns" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.929633 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerName="nova-api-log" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.929716 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" containerName="nova-api-api" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.929773 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e2ed93-b0dc-4698-a3e7-415f090e9ab2" containerName="nova-manage" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.931369 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.956282 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.957324 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.966762 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 21 13:59:08 crc kubenswrapper[4904]: I1121 13:59:08.997746 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.070429 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4m5\" (UniqueName: \"kubernetes.io/projected/9cab507d-ee28-4da3-9ed5-524c74530da5-kube-api-access-mn4m5\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.070556 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-config-data\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.070590 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.070631 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.070723 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.070757 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cab507d-ee28-4da3-9ed5-524c74530da5-logs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.173429 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.173576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.173616 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cab507d-ee28-4da3-9ed5-524c74530da5-logs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.173721 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn4m5\" (UniqueName: \"kubernetes.io/projected/9cab507d-ee28-4da3-9ed5-524c74530da5-kube-api-access-mn4m5\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.173779 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-config-data\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.173806 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.174555 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cab507d-ee28-4da3-9ed5-524c74530da5-logs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.183488 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.184569 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-config-data\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.188019 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-public-tls-certs\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.210642 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cab507d-ee28-4da3-9ed5-524c74530da5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.210853 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn4m5\" (UniqueName: \"kubernetes.io/projected/9cab507d-ee28-4da3-9ed5-524c74530da5-kube-api-access-mn4m5\") pod \"nova-api-0\" (UID: \"9cab507d-ee28-4da3-9ed5-524c74530da5\") " pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.260527 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.575428 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerStarted","Data":"c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6"} Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.578335 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.580732 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-log" containerID="cri-o://c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3" gracePeriod=30 Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.580869 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-metadata" containerID="cri-o://8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185" gracePeriod=30 Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.615514 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.33359235 podStartE2EDuration="6.615479315s" podCreationTimestamp="2025-11-21 13:59:03 +0000 UTC" firstStartedPulling="2025-11-21 13:59:04.364332773 +0000 UTC m=+1618.485865345" lastFinishedPulling="2025-11-21 13:59:08.646219758 +0000 UTC m=+1622.767752310" observedRunningTime="2025-11-21 13:59:09.608459193 +0000 UTC m=+1623.729991745" watchObservedRunningTime="2025-11-21 
13:59:09.615479315 +0000 UTC m=+1623.737011867" Nov 21 13:59:09 crc kubenswrapper[4904]: I1121 13:59:09.830406 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 21 13:59:09 crc kubenswrapper[4904]: W1121 13:59:09.837932 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cab507d_ee28_4da3_9ed5_524c74530da5.slice/crio-d1cee087b35f11660f7635988bc37cbc7170b6218d8528157a2a5f49985d92c1 WatchSource:0}: Error finding container d1cee087b35f11660f7635988bc37cbc7170b6218d8528157a2a5f49985d92c1: Status 404 returned error can't find the container with id d1cee087b35f11660f7635988bc37cbc7170b6218d8528157a2a5f49985d92c1 Nov 21 13:59:10 crc kubenswrapper[4904]: I1121 13:59:10.534536 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee512ba-128b-4864-ac07-391b6e73ecc4" path="/var/lib/kubelet/pods/1ee512ba-128b-4864-ac07-391b6e73ecc4/volumes" Nov 21 13:59:10 crc kubenswrapper[4904]: I1121 13:59:10.594399 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9cab507d-ee28-4da3-9ed5-524c74530da5","Type":"ContainerStarted","Data":"f49308f2a13beb2ebb96b3f0cb9d6a2cd189b18097762659a424d9cc94088a80"} Nov 21 13:59:10 crc kubenswrapper[4904]: I1121 13:59:10.594449 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9cab507d-ee28-4da3-9ed5-524c74530da5","Type":"ContainerStarted","Data":"5a0f2992a506e7de302410d3a1c2999ce31f852e7c058d5a9b3282ac08849773"} Nov 21 13:59:10 crc kubenswrapper[4904]: I1121 13:59:10.594461 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9cab507d-ee28-4da3-9ed5-524c74530da5","Type":"ContainerStarted","Data":"d1cee087b35f11660f7635988bc37cbc7170b6218d8528157a2a5f49985d92c1"} Nov 21 13:59:10 crc kubenswrapper[4904]: I1121 13:59:10.601361 4904 generic.go:334] "Generic (PLEG): container finished" podID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerID="c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3" exitCode=143 Nov 21 13:59:10 crc kubenswrapper[4904]: I1121 13:59:10.601648 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480918e8-d31c-4ef0-a875-8386e2709cd2","Type":"ContainerDied","Data":"c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3"} Nov 21 13:59:12 crc kubenswrapper[4904]: I1121 13:59:12.733482 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": read tcp 10.217.0.2:35998->10.217.0.236:8775: read: connection reset by peer" Nov 21 13:59:12 crc kubenswrapper[4904]: I1121 13:59:12.733561 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": read tcp 10.217.0.2:36000->10.217.0.236:8775: read: connection reset by peer" Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.233578 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630 is running failed: container process not found" 
containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.235711 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630 is running failed: container process not found" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.236266 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630 is running failed: container process not found" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.236385 4904 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" containerName="nova-scheduler-scheduler" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.306229 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.334856 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.334832528 podStartE2EDuration="5.334832528s" podCreationTimestamp="2025-11-21 13:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:10.614641904 +0000 UTC m=+1624.736174456" watchObservedRunningTime="2025-11-21 13:59:13.334832528 +0000 UTC m=+1627.456365090" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.430098 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-config-data\") pod \"480918e8-d31c-4ef0-a875-8386e2709cd2\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.430394 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480918e8-d31c-4ef0-a875-8386e2709cd2-logs\") pod \"480918e8-d31c-4ef0-a875-8386e2709cd2\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.430451 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4rmw\" (UniqueName: \"kubernetes.io/projected/480918e8-d31c-4ef0-a875-8386e2709cd2-kube-api-access-d4rmw\") pod \"480918e8-d31c-4ef0-a875-8386e2709cd2\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.430482 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-combined-ca-bundle\") pod 
\"480918e8-d31c-4ef0-a875-8386e2709cd2\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.430574 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-nova-metadata-tls-certs\") pod \"480918e8-d31c-4ef0-a875-8386e2709cd2\" (UID: \"480918e8-d31c-4ef0-a875-8386e2709cd2\") " Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.432942 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/480918e8-d31c-4ef0-a875-8386e2709cd2-logs" (OuterVolumeSpecName: "logs") pod "480918e8-d31c-4ef0-a875-8386e2709cd2" (UID: "480918e8-d31c-4ef0-a875-8386e2709cd2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.437555 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480918e8-d31c-4ef0-a875-8386e2709cd2-kube-api-access-d4rmw" (OuterVolumeSpecName: "kube-api-access-d4rmw") pod "480918e8-d31c-4ef0-a875-8386e2709cd2" (UID: "480918e8-d31c-4ef0-a875-8386e2709cd2"). InnerVolumeSpecName "kube-api-access-d4rmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.442699 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.467691 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-config-data" (OuterVolumeSpecName: "config-data") pod "480918e8-d31c-4ef0-a875-8386e2709cd2" (UID: "480918e8-d31c-4ef0-a875-8386e2709cd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.484932 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "480918e8-d31c-4ef0-a875-8386e2709cd2" (UID: "480918e8-d31c-4ef0-a875-8386e2709cd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.517480 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "480918e8-d31c-4ef0-a875-8386e2709cd2" (UID: "480918e8-d31c-4ef0-a875-8386e2709cd2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.533842 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82nwk\" (UniqueName: \"kubernetes.io/projected/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-kube-api-access-82nwk\") pod \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.534216 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-config-data\") pod \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.534378 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-combined-ca-bundle\") pod \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\" (UID: \"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4\") " Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.535485 4904 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.535580 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.535721 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480918e8-d31c-4ef0-a875-8386e2709cd2-logs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.535794 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4rmw\" (UniqueName: \"kubernetes.io/projected/480918e8-d31c-4ef0-a875-8386e2709cd2-kube-api-access-d4rmw\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.535859 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480918e8-d31c-4ef0-a875-8386e2709cd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.539303 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-kube-api-access-82nwk" (OuterVolumeSpecName: "kube-api-access-82nwk") pod "bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" (UID: "bcd844ba-9582-4af8-b2ee-d9adf61ad9f4"). InnerVolumeSpecName "kube-api-access-82nwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.567416 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-config-data" (OuterVolumeSpecName: "config-data") pod "bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" (UID: "bcd844ba-9582-4af8-b2ee-d9adf61ad9f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.567872 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" (UID: "bcd844ba-9582-4af8-b2ee-d9adf61ad9f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.642597 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82nwk\" (UniqueName: \"kubernetes.io/projected/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-kube-api-access-82nwk\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.644052 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.645316 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.656773 4904 generic.go:334] "Generic (PLEG): container finished" podID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerID="8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185" exitCode=0 Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.656881 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.657179 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480918e8-d31c-4ef0-a875-8386e2709cd2","Type":"ContainerDied","Data":"8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185"} Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.657286 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"480918e8-d31c-4ef0-a875-8386e2709cd2","Type":"ContainerDied","Data":"00d81d5624ef1811cf34da4cb39efec2728758e63bc3be0906f22cc43126f52a"} Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.657318 4904 scope.go:117] "RemoveContainer" containerID="8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.659074 4904 generic.go:334] "Generic (PLEG): container finished" podID="bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" exitCode=0 Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.659170 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4","Type":"ContainerDied","Data":"6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630"} Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.659224 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bcd844ba-9582-4af8-b2ee-d9adf61ad9f4","Type":"ContainerDied","Data":"82d41089d5b62323d945e248edc960094f41197d093442972b1e93643a7e345a"} Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.659183 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.721268 4904 scope.go:117] "RemoveContainer" containerID="c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.733804 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.749133 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.758944 4904 scope.go:117] "RemoveContainer" containerID="8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185" Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.759411 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185\": container with ID starting with 8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185 not found: ID does not exist" containerID="8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.759451 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185"} err="failed to get container status \"8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185\": rpc error: code = NotFound desc = could not find container \"8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185\": container with ID starting with 8e84097e929bc781c6819953540065a68268d614e7efc4712a60ed0f47274185 not found: ID does not exist" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.759472 4904 scope.go:117] "RemoveContainer" containerID="c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3" Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.760304 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3\": container with ID starting with c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3 not found: ID does not exist" containerID="c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.760331 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3"} err="failed to get container status \"c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3\": rpc error: code = NotFound desc = could not find container \"c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3\": container with ID starting with c9d612deba084befc40a8f4ec3faff7076034f760d3a5f8974f7898d06b271b3 not found: ID does not exist" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.760349 4904 scope.go:117] "RemoveContainer" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.767874 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.787837 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.788475 
4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" containerName="nova-scheduler-scheduler" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.788505 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" containerName="nova-scheduler-scheduler" Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.788545 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-metadata" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.788556 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-metadata" Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.788599 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-log" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.788607 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-log" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.788884 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-log" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.788904 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" containerName="nova-metadata-metadata" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.788917 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" containerName="nova-scheduler-scheduler" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.790576 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.797341 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.805367 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.805618 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.808005 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.811263 4904 scope.go:117] "RemoveContainer" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" Nov 21 13:59:13 crc kubenswrapper[4904]: E1121 13:59:13.811872 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630\": container with ID starting with 6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630 not found: ID does not exist" containerID="6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.811918 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630"} err="failed to get container status \"6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630\": rpc error: code = NotFound desc = could not find container \"6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630\": container with ID starting with 6719db8736aaa59a84ed487982a24added3247da7aab8e1e3de24318721af630 not found: ID does not exist" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.817878 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.819903 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.826322 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.829749 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.850350 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkkr\" (UniqueName: \"kubernetes.io/projected/56ae0c0e-1f08-4114-adf0-e4a915d519aa-kube-api-access-bnkkr\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.850399 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-config-data\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.850457 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ae0c0e-1f08-4114-adf0-e4a915d519aa-logs\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.850497 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.850577 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.953105 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnkkr\" (UniqueName: \"kubernetes.io/projected/56ae0c0e-1f08-4114-adf0-e4a915d519aa-kube-api-access-bnkkr\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.953170 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-config-data\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.953262 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ae0c0e-1f08-4114-adf0-e4a915d519aa-logs\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.953409 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.953936 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56ae0c0e-1f08-4114-adf0-e4a915d519aa-logs\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.954027 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.954088 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6891983-1117-4522-9431-2c73ac552c8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.954162 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6891983-1117-4522-9431-2c73ac552c8a-config-data\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.954259 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59g8b\" (UniqueName: \"kubernetes.io/projected/e6891983-1117-4522-9431-2c73ac552c8a-kube-api-access-59g8b\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.958262 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.958539 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.963537 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56ae0c0e-1f08-4114-adf0-e4a915d519aa-config-data\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:13 crc kubenswrapper[4904]: I1121 13:59:13.975374 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnkkr\" (UniqueName: \"kubernetes.io/projected/56ae0c0e-1f08-4114-adf0-e4a915d519aa-kube-api-access-bnkkr\") pod \"nova-metadata-0\" (UID: \"56ae0c0e-1f08-4114-adf0-e4a915d519aa\") " pod="openstack/nova-metadata-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 
13:59:14.056562 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6891983-1117-4522-9431-2c73ac552c8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.056688 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6891983-1117-4522-9431-2c73ac552c8a-config-data\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.056733 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59g8b\" (UniqueName: \"kubernetes.io/projected/e6891983-1117-4522-9431-2c73ac552c8a-kube-api-access-59g8b\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.061980 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6891983-1117-4522-9431-2c73ac552c8a-config-data\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.062041 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6891983-1117-4522-9431-2c73ac552c8a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.077132 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59g8b\" (UniqueName: \"kubernetes.io/projected/e6891983-1117-4522-9431-2c73ac552c8a-kube-api-access-59g8b\") pod \"nova-scheduler-0\" (UID: \"e6891983-1117-4522-9431-2c73ac552c8a\") " pod="openstack/nova-scheduler-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.201031 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.215355 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.526620 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480918e8-d31c-4ef0-a875-8386e2709cd2" path="/var/lib/kubelet/pods/480918e8-d31c-4ef0-a875-8386e2709cd2/volumes" Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.527738 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd844ba-9582-4af8-b2ee-d9adf61ad9f4" path="/var/lib/kubelet/pods/bcd844ba-9582-4af8-b2ee-d9adf61ad9f4/volumes" Nov 21 13:59:14 crc kubenswrapper[4904]: W1121 13:59:14.727117 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56ae0c0e_1f08_4114_adf0_e4a915d519aa.slice/crio-c85ba2762832bc19d6af6232d3935ca0fb93b2f8ef7262793c99ae0591421621 WatchSource:0}: Error finding container c85ba2762832bc19d6af6232d3935ca0fb93b2f8ef7262793c99ae0591421621: Status 404 returned error can't find the container with id c85ba2762832bc19d6af6232d3935ca0fb93b2f8ef7262793c99ae0591421621 Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.739587 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 21 13:59:14 crc kubenswrapper[4904]: I1121 13:59:14.750632 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 21 13:59:15 crc kubenswrapper[4904]: I1121 13:59:15.693722 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e6891983-1117-4522-9431-2c73ac552c8a","Type":"ContainerStarted","Data":"d0970a9afbcb7c62ff14a6a3417b1dd93ebaf06a3e95430e0f7761573dabb93b"} Nov 21 13:59:15 crc kubenswrapper[4904]: I1121 13:59:15.694570 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e6891983-1117-4522-9431-2c73ac552c8a","Type":"ContainerStarted","Data":"5d2c6e1175047bc1ca9408b2928aad4a04820ad51aacbcfdea978985b6a02470"} Nov 21 13:59:15 crc kubenswrapper[4904]: I1121 13:59:15.701374 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ae0c0e-1f08-4114-adf0-e4a915d519aa","Type":"ContainerStarted","Data":"a3fe24e2719f1745b5c1f5d77310ee8e5f0d10f83487d1784a78c31fb6af0008"} Nov 21 13:59:15 crc kubenswrapper[4904]: I1121 13:59:15.701434 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ae0c0e-1f08-4114-adf0-e4a915d519aa","Type":"ContainerStarted","Data":"e96a566b9979b5e1721e3d6724d06552c0db90a05cdcf3c0dfa14a1e2b418b7d"} Nov 21 13:59:15 crc kubenswrapper[4904]: I1121 13:59:15.701451 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"56ae0c0e-1f08-4114-adf0-e4a915d519aa","Type":"ContainerStarted","Data":"c85ba2762832bc19d6af6232d3935ca0fb93b2f8ef7262793c99ae0591421621"} Nov 21 13:59:15 crc kubenswrapper[4904]: I1121 13:59:15.713869 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.713847273 podStartE2EDuration="2.713847273s" podCreationTimestamp="2025-11-21 13:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:15.71125406 +0000 UTC m=+1629.832786622" watchObservedRunningTime="2025-11-21 13:59:15.713847273 +0000 UTC m=+1629.835379825" Nov 21 13:59:15 crc kubenswrapper[4904]: I1121 13:59:15.739897 4904 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.739834971 podStartE2EDuration="2.739834971s" podCreationTimestamp="2025-11-21 13:59:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 13:59:15.730281346 +0000 UTC m=+1629.851813918" watchObservedRunningTime="2025-11-21 13:59:15.739834971 +0000 UTC m=+1629.861367553" Nov 21 13:59:19 crc kubenswrapper[4904]: I1121 13:59:19.201201 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 13:59:19 crc kubenswrapper[4904]: I1121 13:59:19.201880 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 21 13:59:19 crc kubenswrapper[4904]: I1121 13:59:19.215755 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 21 13:59:19 crc kubenswrapper[4904]: I1121 13:59:19.261836 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 13:59:19 crc kubenswrapper[4904]: I1121 13:59:19.261894 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 21 13:59:20 crc kubenswrapper[4904]: I1121 13:59:20.307001 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9cab507d-ee28-4da3-9ed5-524c74530da5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.243:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 13:59:20 crc kubenswrapper[4904]: I1121 13:59:20.307074 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9cab507d-ee28-4da3-9ed5-524c74530da5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 21 13:59:20 crc kubenswrapper[4904]: I1121 13:59:20.513733 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:59:20 crc kubenswrapper[4904]: E1121 13:59:20.514167 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:59:24 crc kubenswrapper[4904]: I1121 13:59:24.201884 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 13:59:24 crc kubenswrapper[4904]: I1121 13:59:24.202462 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 21 13:59:24 crc kubenswrapper[4904]: I1121 13:59:24.216130 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 21 13:59:24 crc kubenswrapper[4904]: I1121 13:59:24.257317 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 21 13:59:24 crc kubenswrapper[4904]: I1121 13:59:24.864597 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Nov 21 13:59:25 crc kubenswrapper[4904]: I1121 13:59:25.219947 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="56ae0c0e-1f08-4114-adf0-e4a915d519aa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 13:59:25 crc kubenswrapper[4904]: I1121 13:59:25.219996 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="56ae0c0e-1f08-4114-adf0-e4a915d519aa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 21 13:59:29 crc kubenswrapper[4904]: I1121 13:59:29.269792 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 13:59:29 crc kubenswrapper[4904]: I1121 13:59:29.270481 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 21 13:59:29 crc kubenswrapper[4904]: I1121 13:59:29.270755 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 13:59:29 crc kubenswrapper[4904]: I1121 13:59:29.270811 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 21 13:59:29 crc kubenswrapper[4904]: I1121 13:59:29.277547 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 13:59:29 crc kubenswrapper[4904]: I1121 13:59:29.277606 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 21 13:59:31 crc kubenswrapper[4904]: I1121 13:59:31.514363 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:59:31 crc kubenswrapper[4904]: E1121 13:59:31.515085 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:59:33 crc kubenswrapper[4904]: I1121 13:59:33.848071 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 13:59:34 crc kubenswrapper[4904]: I1121 13:59:34.207899 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 13:59:34 crc kubenswrapper[4904]: I1121 13:59:34.208861 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 21 13:59:34 crc kubenswrapper[4904]: I1121 13:59:34.213082 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 13:59:34 crc kubenswrapper[4904]: I1121 13:59:34.979977 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 21 13:59:36 crc kubenswrapper[4904]: I1121 13:59:36.572586 4904 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb6b4562e-b84b-4bff-9680-51341eb3864c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort 
podb6b4562e-b84b-4bff-9680-51341eb3864c] : Timed out while waiting for systemd to remove kubepods-besteffort-podb6b4562e_b84b_4bff_9680_51341eb3864c.slice" Nov 21 13:59:36 crc kubenswrapper[4904]: E1121 13:59:36.572692 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb6b4562e-b84b-4bff-9680-51341eb3864c] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb6b4562e-b84b-4bff-9680-51341eb3864c] : Timed out while waiting for systemd to remove kubepods-besteffort-podb6b4562e_b84b_4bff_9680_51341eb3864c.slice" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" podUID="b6b4562e-b84b-4bff-9680-51341eb3864c" Nov 21 13:59:37 crc kubenswrapper[4904]: I1121 13:59:36.999831 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-c8rrf" Nov 21 13:59:37 crc kubenswrapper[4904]: I1121 13:59:37.067816 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8rrf"] Nov 21 13:59:37 crc kubenswrapper[4904]: I1121 13:59:37.080378 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-c8rrf"] Nov 21 13:59:38 crc kubenswrapper[4904]: I1121 13:59:38.529421 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b4562e-b84b-4bff-9680-51341eb3864c" path="/var/lib/kubelet/pods/b6b4562e-b84b-4bff-9680-51341eb3864c/volumes" Nov 21 13:59:38 crc kubenswrapper[4904]: I1121 13:59:38.656083 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:59:38 crc kubenswrapper[4904]: I1121 13:59:38.656398 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4c951951-b705-4ab1-b041-887d038f35ec" containerName="kube-state-metrics" containerID="cri-o://4135b5b1c43c893249862613ded08d1c2602a90d6ec1282a8c7bdb6aa24a31b6" gracePeriod=30 Nov 21 13:59:38 crc kubenswrapper[4904]: I1121 13:59:38.745478 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:59:38 crc kubenswrapper[4904]: I1121 13:59:38.746193 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="1bebdadb-691e-41d5-8bff-00fe08591c75" containerName="mysqld-exporter" containerID="cri-o://7c9159023caa78eea12ced90dc14587e997d300cc1d4acb1b2f582a122b354ca" gracePeriod=30 Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.116199 4904 generic.go:334] "Generic (PLEG): container finished" podID="1bebdadb-691e-41d5-8bff-00fe08591c75" containerID="7c9159023caa78eea12ced90dc14587e997d300cc1d4acb1b2f582a122b354ca" exitCode=2 Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.116756 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1bebdadb-691e-41d5-8bff-00fe08591c75","Type":"ContainerDied","Data":"7c9159023caa78eea12ced90dc14587e997d300cc1d4acb1b2f582a122b354ca"} Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.138098 4904 generic.go:334] "Generic (PLEG): container finished" podID="4c951951-b705-4ab1-b041-887d038f35ec" containerID="4135b5b1c43c893249862613ded08d1c2602a90d6ec1282a8c7bdb6aa24a31b6" exitCode=2 Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.138153 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"4c951951-b705-4ab1-b041-887d038f35ec","Type":"ContainerDied","Data":"4135b5b1c43c893249862613ded08d1c2602a90d6ec1282a8c7bdb6aa24a31b6"} Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.388972 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.458422 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcpmd\" (UniqueName: \"kubernetes.io/projected/4c951951-b705-4ab1-b041-887d038f35ec-kube-api-access-xcpmd\") pod \"4c951951-b705-4ab1-b041-887d038f35ec\" (UID: \"4c951951-b705-4ab1-b041-887d038f35ec\") " Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.470149 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c951951-b705-4ab1-b041-887d038f35ec-kube-api-access-xcpmd" (OuterVolumeSpecName: "kube-api-access-xcpmd") pod "4c951951-b705-4ab1-b041-887d038f35ec" (UID: "4c951951-b705-4ab1-b041-887d038f35ec"). InnerVolumeSpecName "kube-api-access-xcpmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.488637 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.561464 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-config-data\") pod \"1bebdadb-691e-41d5-8bff-00fe08591c75\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.562027 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-combined-ca-bundle\") pod \"1bebdadb-691e-41d5-8bff-00fe08591c75\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.562335 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp69d\" (UniqueName: \"kubernetes.io/projected/1bebdadb-691e-41d5-8bff-00fe08591c75-kube-api-access-qp69d\") pod \"1bebdadb-691e-41d5-8bff-00fe08591c75\" (UID: \"1bebdadb-691e-41d5-8bff-00fe08591c75\") " Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.563323 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcpmd\" (UniqueName: \"kubernetes.io/projected/4c951951-b705-4ab1-b041-887d038f35ec-kube-api-access-xcpmd\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.583471 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bebdadb-691e-41d5-8bff-00fe08591c75-kube-api-access-qp69d" (OuterVolumeSpecName: "kube-api-access-qp69d") pod "1bebdadb-691e-41d5-8bff-00fe08591c75" (UID: "1bebdadb-691e-41d5-8bff-00fe08591c75"). InnerVolumeSpecName "kube-api-access-qp69d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.610711 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bebdadb-691e-41d5-8bff-00fe08591c75" (UID: "1bebdadb-691e-41d5-8bff-00fe08591c75"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.654457 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-config-data" (OuterVolumeSpecName: "config-data") pod "1bebdadb-691e-41d5-8bff-00fe08591c75" (UID: "1bebdadb-691e-41d5-8bff-00fe08591c75"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.666482 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp69d\" (UniqueName: \"kubernetes.io/projected/1bebdadb-691e-41d5-8bff-00fe08591c75-kube-api-access-qp69d\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.666530 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:39 crc kubenswrapper[4904]: I1121 13:59:39.666547 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bebdadb-691e-41d5-8bff-00fe08591c75-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.154677 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1bebdadb-691e-41d5-8bff-00fe08591c75","Type":"ContainerDied","Data":"35492a826c3ab44fab48396522b6d29c4bee852785a9521cd21ca1dea2fd3fa3"} Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.154724 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.154774 4904 scope.go:117] "RemoveContainer" containerID="7c9159023caa78eea12ced90dc14587e997d300cc1d4acb1b2f582a122b354ca" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.158711 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4c951951-b705-4ab1-b041-887d038f35ec","Type":"ContainerDied","Data":"0f5b1aafb68eda6b6f2a7806c503378c5eb49ea3426aca72eee21aa1fa832e6d"} Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.158824 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.220791 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.234304 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.249719 4904 scope.go:117] "RemoveContainer" containerID="4135b5b1c43c893249862613ded08d1c2602a90d6ec1282a8c7bdb6aa24a31b6" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.257456 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.273429 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.282724 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:59:40 crc kubenswrapper[4904]: E1121 13:59:40.283742 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c951951-b705-4ab1-b041-887d038f35ec" containerName="kube-state-metrics" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.283769 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c951951-b705-4ab1-b041-887d038f35ec" containerName="kube-state-metrics" Nov 21 13:59:40 crc kubenswrapper[4904]: E1121 13:59:40.283781 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bebdadb-691e-41d5-8bff-00fe08591c75" containerName="mysqld-exporter" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.283791 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bebdadb-691e-41d5-8bff-00fe08591c75" containerName="mysqld-exporter" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.284137 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bebdadb-691e-41d5-8bff-00fe08591c75" containerName="mysqld-exporter" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.284177 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c951951-b705-4ab1-b041-887d038f35ec" containerName="kube-state-metrics" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.285712 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.291260 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.291708 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.297284 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.319497 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.321674 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.324455 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.324793 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.331509 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.386710 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.386784 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-config-data\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.386850 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.386880 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njcsh\" (UniqueName: \"kubernetes.io/projected/78e4e986-d20d-4494-bff4-3cc0fb8af825-kube-api-access-njcsh\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.386904 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvsq\" (UniqueName: \"kubernetes.io/projected/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-api-access-jwvsq\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.386947 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.387029 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.387055 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.489248 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.489346 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-config-data\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.489422 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.489465 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njcsh\" (UniqueName: \"kubernetes.io/projected/78e4e986-d20d-4494-bff4-3cc0fb8af825-kube-api-access-njcsh\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.489495 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvsq\" (UniqueName: \"kubernetes.io/projected/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-api-access-jwvsq\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.489556 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.490529 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.490576 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.496326 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-config-data\") pod \"mysqld-exporter-0\" (UID: 
\"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.496327 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.496596 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.497097 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.497535 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.498794 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/78e4e986-d20d-4494-bff4-3cc0fb8af825-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.513542 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njcsh\" (UniqueName: \"kubernetes.io/projected/78e4e986-d20d-4494-bff4-3cc0fb8af825-kube-api-access-njcsh\") pod \"mysqld-exporter-0\" (UID: \"78e4e986-d20d-4494-bff4-3cc0fb8af825\") " pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.516825 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvsq\" (UniqueName: \"kubernetes.io/projected/a18139cc-0f50-4c9f-bbb2-7637d2a3c299-kube-api-access-jwvsq\") pod \"kube-state-metrics-0\" (UID: \"a18139cc-0f50-4c9f-bbb2-7637d2a3c299\") " pod="openstack/kube-state-metrics-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.533013 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bebdadb-691e-41d5-8bff-00fe08591c75" path="/var/lib/kubelet/pods/1bebdadb-691e-41d5-8bff-00fe08591c75/volumes" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.534708 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c951951-b705-4ab1-b041-887d038f35ec" path="/var/lib/kubelet/pods/4c951951-b705-4ab1-b041-887d038f35ec/volumes" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.613711 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 21 13:59:40 crc kubenswrapper[4904]: I1121 13:59:40.643749 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 21 13:59:41 crc kubenswrapper[4904]: I1121 13:59:41.174461 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 21 13:59:41 crc kubenswrapper[4904]: I1121 13:59:41.270515 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 21 13:59:41 crc kubenswrapper[4904]: W1121 13:59:41.277584 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda18139cc_0f50_4c9f_bbb2_7637d2a3c299.slice/crio-a0cac1a90c99c838f5146caa4fbd97ad441d167480d13945af91aea819dbbebb WatchSource:0}: Error finding container a0cac1a90c99c838f5146caa4fbd97ad441d167480d13945af91aea819dbbebb: Status 404 returned error can't find the container with id a0cac1a90c99c838f5146caa4fbd97ad441d167480d13945af91aea819dbbebb Nov 21 13:59:41 crc kubenswrapper[4904]: I1121 13:59:41.358034 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:41 crc kubenswrapper[4904]: I1121 13:59:41.359007 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="ceilometer-central-agent" containerID="cri-o://b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7" gracePeriod=30 Nov 21 13:59:41 crc kubenswrapper[4904]: I1121 13:59:41.359103 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="sg-core" containerID="cri-o://4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c" gracePeriod=30 Nov 21 13:59:41 crc kubenswrapper[4904]: I1121 13:59:41.359160 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="proxy-httpd" containerID="cri-o://c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6" gracePeriod=30 Nov 21 13:59:41 crc kubenswrapper[4904]: I1121 13:59:41.359204 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="ceilometer-notification-agent" containerID="cri-o://a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa" gracePeriod=30 Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.204538 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"78e4e986-d20d-4494-bff4-3cc0fb8af825","Type":"ContainerStarted","Data":"5f503e66fbf288e88a9125ad34e31e12b625e435473d1fa5395b1ec3f49a9518"} Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.205010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"78e4e986-d20d-4494-bff4-3cc0fb8af825","Type":"ContainerStarted","Data":"fe86e150a86357b143e23d842a6be80cce8b69c6d49e875c48744b656bc4a0fc"} Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.207332 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a18139cc-0f50-4c9f-bbb2-7637d2a3c299","Type":"ContainerStarted","Data":"87bde5dfac09ed1003937b00a2cfda704b14e663b0282e169648750b4838bbbd"} Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.207372 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"a18139cc-0f50-4c9f-bbb2-7637d2a3c299","Type":"ContainerStarted","Data":"a0cac1a90c99c838f5146caa4fbd97ad441d167480d13945af91aea819dbbebb"} Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.207475 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.212188 4904 generic.go:334] "Generic (PLEG): container finished" podID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerID="c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6" exitCode=0 Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.212215 4904 generic.go:334] "Generic (PLEG): container finished" podID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerID="4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c" exitCode=2 Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.212225 4904 generic.go:334] "Generic (PLEG): container finished" podID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerID="b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7" exitCode=0 Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.212245 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerDied","Data":"c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6"} Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.212268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerDied","Data":"4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c"} Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.212278 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerDied","Data":"b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7"} Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.244954 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.625019964 podStartE2EDuration="2.24491976s" podCreationTimestamp="2025-11-21 13:59:40 +0000 UTC" firstStartedPulling="2025-11-21 13:59:41.180707277 +0000 UTC m=+1655.302239829" lastFinishedPulling="2025-11-21 13:59:41.800607073 +0000 UTC m=+1655.922139625" observedRunningTime="2025-11-21 13:59:42.226842277 +0000 UTC m=+1656.348374829" watchObservedRunningTime="2025-11-21 13:59:42.24491976 +0000 UTC m=+1656.366452332" Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.256249 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.773995572 podStartE2EDuration="2.256215226s" podCreationTimestamp="2025-11-21 13:59:40 +0000 UTC" firstStartedPulling="2025-11-21 13:59:41.284736045 +0000 UTC m=+1655.406268597" lastFinishedPulling="2025-11-21 13:59:41.766955699 +0000 UTC m=+1655.888488251" observedRunningTime="2025-11-21 13:59:42.246841626 +0000 UTC m=+1656.368374198" watchObservedRunningTime="2025-11-21 13:59:42.256215226 +0000 UTC m=+1656.377747788" Nov 21 13:59:42 crc kubenswrapper[4904]: I1121 13:59:42.513688 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:59:42 crc kubenswrapper[4904]: E1121 13:59:42.514481 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.813338 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.893851 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-log-httpd\") pod \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.894065 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-sg-core-conf-yaml\") pod \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.894750 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e09a9fa2-cf84-47ba-9d53-64bac6c90ade" (UID: "e09a9fa2-cf84-47ba-9d53-64bac6c90ade"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.896776 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjb2z\" (UniqueName: \"kubernetes.io/projected/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-kube-api-access-pjb2z\") pod \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.897368 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-config-data\") pod \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.897620 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-scripts\") pod \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.897964 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-combined-ca-bundle\") pod \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.899362 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-run-httpd\") pod \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\" (UID: \"e09a9fa2-cf84-47ba-9d53-64bac6c90ade\") " Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.899827 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e09a9fa2-cf84-47ba-9d53-64bac6c90ade" (UID: "e09a9fa2-cf84-47ba-9d53-64bac6c90ade"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.903331 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.903361 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.918800 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-scripts" (OuterVolumeSpecName: "scripts") pod "e09a9fa2-cf84-47ba-9d53-64bac6c90ade" (UID: "e09a9fa2-cf84-47ba-9d53-64bac6c90ade"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.918905 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-kube-api-access-pjb2z" (OuterVolumeSpecName: "kube-api-access-pjb2z") pod "e09a9fa2-cf84-47ba-9d53-64bac6c90ade" (UID: "e09a9fa2-cf84-47ba-9d53-64bac6c90ade"). InnerVolumeSpecName "kube-api-access-pjb2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:43 crc kubenswrapper[4904]: I1121 13:59:43.950373 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e09a9fa2-cf84-47ba-9d53-64bac6c90ade" (UID: "e09a9fa2-cf84-47ba-9d53-64bac6c90ade"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.006577 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjb2z\" (UniqueName: \"kubernetes.io/projected/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-kube-api-access-pjb2z\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.006611 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.006623 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.045879 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e09a9fa2-cf84-47ba-9d53-64bac6c90ade" (UID: "e09a9fa2-cf84-47ba-9d53-64bac6c90ade"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.060076 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-config-data" (OuterVolumeSpecName: "config-data") pod "e09a9fa2-cf84-47ba-9d53-64bac6c90ade" (UID: "e09a9fa2-cf84-47ba-9d53-64bac6c90ade"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.108418 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.108455 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e09a9fa2-cf84-47ba-9d53-64bac6c90ade-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.248715 4904 generic.go:334] "Generic (PLEG): container finished" podID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerID="a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa" exitCode=0 Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.248783 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerDied","Data":"a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa"} Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.248824 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e09a9fa2-cf84-47ba-9d53-64bac6c90ade","Type":"ContainerDied","Data":"dd131677f450e6b6033b4173dc9ad176832bb8f4b878a5047ff3d8901d4a57b9"} Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.248855 4904 scope.go:117] "RemoveContainer" containerID="c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.249011 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.299530 4904 scope.go:117] "RemoveContainer" containerID="4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.301030 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.319576 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.333207 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:44 crc kubenswrapper[4904]: E1121 13:59:44.334089 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="ceilometer-notification-agent" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.334120 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="ceilometer-notification-agent" Nov 21 13:59:44 crc kubenswrapper[4904]: E1121 13:59:44.334144 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="sg-core" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.334155 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="sg-core" Nov 21 13:59:44 crc kubenswrapper[4904]: E1121 13:59:44.334217 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="proxy-httpd" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.334227 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="proxy-httpd" Nov 21 13:59:44 crc kubenswrapper[4904]: E1121 13:59:44.334251 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="ceilometer-central-agent" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.334259 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="ceilometer-central-agent" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.334575 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="proxy-httpd" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.334606 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="ceilometer-notification-agent" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.334635 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="sg-core" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.334707 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" containerName="ceilometer-central-agent" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.342168 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.344446 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.345809 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.346007 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.347152 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.361262 4904 scope.go:117] "RemoveContainer" containerID="a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.416905 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.416970 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-log-httpd\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.417010 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.417037 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-run-httpd\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.417071 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-config-data\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.417103 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.417127 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-scripts\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.417209 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk92g\" (UniqueName: \"kubernetes.io/projected/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-kube-api-access-pk92g\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.451767 4904 scope.go:117] "RemoveContainer" containerID="b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.481179 4904 scope.go:117] "RemoveContainer" containerID="c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6" Nov 21 13:59:44 crc kubenswrapper[4904]: E1121 13:59:44.481966 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6\": container with ID starting with c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6 not found: ID does not exist" containerID="c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.482013 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6"} err="failed to get container status \"c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6\": rpc error: code = NotFound desc = could not find container \"c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6\": container with ID starting with c3597b1335ad8c819f9ddbf67ce6c6549cb9245f5d227ad3c21da3c28adc28c6 not found: ID does not exist" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.482095 4904 scope.go:117] "RemoveContainer" containerID="4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c" Nov 21 13:59:44 crc kubenswrapper[4904]: E1121 13:59:44.482647 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c\": container with ID starting with 4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c not found: ID does not exist" containerID="4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.482722 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c"} err="failed to get container status \"4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c\": rpc error: code = NotFound desc = could not find container \"4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c\": container with ID starting with 4c5a5bd1c7ee9140df32e745732bd6a1a126daf372ca6b1384399564795b4b0c not found: ID does not exist" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.482743 4904 scope.go:117] "RemoveContainer" containerID="a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa" Nov 21 13:59:44 crc kubenswrapper[4904]: E1121 13:59:44.483264 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa\": container with ID starting with a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa not found: ID does not exist" 
containerID="a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.483335 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa"} err="failed to get container status \"a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa\": rpc error: code = NotFound desc = could not find container \"a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa\": container with ID starting with a5797214663ca1e9aa527f415be56584ffc798752aab0f3c406490acf1eec7aa not found: ID does not exist" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.483395 4904 scope.go:117] "RemoveContainer" containerID="b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7" Nov 21 13:59:44 crc kubenswrapper[4904]: E1121 13:59:44.483884 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7\": container with ID starting with b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7 not found: ID does not exist" containerID="b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.483939 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7"} err="failed to get container status \"b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7\": rpc error: code = NotFound desc = could not find container \"b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7\": container with ID starting with b0f0aad8ee4833a62190eedefe0b0bcf78bd0a9a08919f27f8235b798955d2d7 not found: ID does not exist" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.518787 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.518864 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-scripts\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.519057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk92g\" (UniqueName: \"kubernetes.io/projected/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-kube-api-access-pk92g\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.519143 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.519204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-log-httpd\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.519265 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.519317 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-run-httpd\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.519371 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-config-data\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.519951 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-log-httpd\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.520079 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-run-httpd\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.523317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-scripts\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.523503 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.525339 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.525894 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-config-data\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.526932 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.532777 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e09a9fa2-cf84-47ba-9d53-64bac6c90ade" path="/var/lib/kubelet/pods/e09a9fa2-cf84-47ba-9d53-64bac6c90ade/volumes" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.538358 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk92g\" (UniqueName: \"kubernetes.io/projected/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-kube-api-access-pk92g\") pod \"ceilometer-0\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " pod="openstack/ceilometer-0" Nov 21 13:59:44 crc kubenswrapper[4904]: I1121 13:59:44.672970 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:45 crc kubenswrapper[4904]: W1121 13:59:45.012160 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88ea509c_baf0_4c78_9bab_3ece4c8e28f3.slice/crio-122b005c4ba91d26d527703fc91121d4d7ad5d1138f60b8f8e4d8cccd7dd99fc WatchSource:0}: Error finding container 122b005c4ba91d26d527703fc91121d4d7ad5d1138f60b8f8e4d8cccd7dd99fc: Status 404 returned error can't find the container with id 122b005c4ba91d26d527703fc91121d4d7ad5d1138f60b8f8e4d8cccd7dd99fc Nov 21 13:59:45 crc kubenswrapper[4904]: I1121 13:59:45.015357 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:45 crc kubenswrapper[4904]: I1121 13:59:45.271731 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerStarted","Data":"122b005c4ba91d26d527703fc91121d4d7ad5d1138f60b8f8e4d8cccd7dd99fc"} Nov 21 13:59:46 crc kubenswrapper[4904]: I1121 13:59:46.291237 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerStarted","Data":"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414"} Nov 21 13:59:47 crc kubenswrapper[4904]: I1121 13:59:47.304580 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerStarted","Data":"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c"} Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.317630 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerStarted","Data":"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5"} Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.481966 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jpqkx"] Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.492337 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jpqkx"] Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.526405 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="517c919d-5531-457e-ae70-aa39ea0282a9" path="/var/lib/kubelet/pods/517c919d-5531-457e-ae70-aa39ea0282a9/volumes" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.572204 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-td4l8"] Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.575809 4904 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.588152 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-td4l8"] Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.763420 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbcb\" (UniqueName: \"kubernetes.io/projected/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-kube-api-access-ztbcb\") pod \"heat-db-sync-td4l8\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.763762 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-config-data\") pod \"heat-db-sync-td4l8\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.763922 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-combined-ca-bundle\") pod \"heat-db-sync-td4l8\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.867072 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbcb\" (UniqueName: \"kubernetes.io/projected/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-kube-api-access-ztbcb\") pod \"heat-db-sync-td4l8\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.867126 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-config-data\") pod \"heat-db-sync-td4l8\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.867177 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-combined-ca-bundle\") pod \"heat-db-sync-td4l8\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.875841 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-combined-ca-bundle\") pod \"heat-db-sync-td4l8\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.885965 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-config-data\") pod \"heat-db-sync-td4l8\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.890574 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbcb\" (UniqueName: \"kubernetes.io/projected/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-kube-api-access-ztbcb\") pod \"heat-db-sync-td4l8\" (UID: 
\"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:48 crc kubenswrapper[4904]: I1121 13:59:48.892888 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-td4l8" Nov 21 13:59:49 crc kubenswrapper[4904]: W1121 13:59:49.435543 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6cc886d_3403_4dab_82a8_35aacd9e2bc1.slice/crio-9934ec6645c770f719feef530e76c6deef14626f60da09c8856885cc1e264c54 WatchSource:0}: Error finding container 9934ec6645c770f719feef530e76c6deef14626f60da09c8856885cc1e264c54: Status 404 returned error can't find the container with id 9934ec6645c770f719feef530e76c6deef14626f60da09c8856885cc1e264c54 Nov 21 13:59:49 crc kubenswrapper[4904]: I1121 13:59:49.440151 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-td4l8"] Nov 21 13:59:50 crc kubenswrapper[4904]: I1121 13:59:50.012708 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 13:59:50 crc kubenswrapper[4904]: I1121 13:59:50.353446 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerStarted","Data":"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b"} Nov 21 13:59:50 crc kubenswrapper[4904]: I1121 13:59:50.355072 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 13:59:50 crc kubenswrapper[4904]: I1121 13:59:50.360745 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-td4l8" event={"ID":"a6cc886d-3403-4dab-82a8-35aacd9e2bc1","Type":"ContainerStarted","Data":"9934ec6645c770f719feef530e76c6deef14626f60da09c8856885cc1e264c54"} Nov 21 13:59:50 crc kubenswrapper[4904]: I1121 13:59:50.421823 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.383658869 podStartE2EDuration="6.421799972s" podCreationTimestamp="2025-11-21 13:59:44 +0000 UTC" firstStartedPulling="2025-11-21 13:59:45.016173535 +0000 UTC m=+1659.137706087" lastFinishedPulling="2025-11-21 13:59:49.054314638 +0000 UTC m=+1663.175847190" observedRunningTime="2025-11-21 13:59:50.4074324 +0000 UTC m=+1664.528964962" watchObservedRunningTime="2025-11-21 13:59:50.421799972 +0000 UTC m=+1664.543332524" Nov 21 13:59:50 crc kubenswrapper[4904]: I1121 13:59:50.655890 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 21 13:59:51 crc kubenswrapper[4904]: I1121 13:59:51.245405 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 13:59:52 crc kubenswrapper[4904]: I1121 13:59:52.763805 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:53 crc kubenswrapper[4904]: I1121 13:59:53.401820 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="ceilometer-central-agent" containerID="cri-o://9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414" gracePeriod=30 Nov 21 13:59:53 crc kubenswrapper[4904]: I1121 13:59:53.402753 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="sg-core" 
containerID="cri-o://36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5" gracePeriod=30 Nov 21 13:59:53 crc kubenswrapper[4904]: I1121 13:59:53.402777 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="ceilometer-notification-agent" containerID="cri-o://2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c" gracePeriod=30 Nov 21 13:59:53 crc kubenswrapper[4904]: I1121 13:59:53.402758 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="proxy-httpd" containerID="cri-o://56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b" gracePeriod=30 Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.318305 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.358261 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-sg-core-conf-yaml\") pod \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.358330 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-config-data\") pod \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.358353 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-run-httpd\") pod \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.358400 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk92g\" (UniqueName: \"kubernetes.io/projected/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-kube-api-access-pk92g\") pod \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.358547 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-scripts\") pod \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.358591 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-combined-ca-bundle\") pod \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.358675 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-log-httpd\") pod \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.358736 4904 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-ceilometer-tls-certs\") pod \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\" (UID: \"88ea509c-baf0-4c78-9bab-3ece4c8e28f3\") " Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.359050 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88ea509c-baf0-4c78-9bab-3ece4c8e28f3" (UID: "88ea509c-baf0-4c78-9bab-3ece4c8e28f3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.360066 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.360676 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88ea509c-baf0-4c78-9bab-3ece4c8e28f3" (UID: "88ea509c-baf0-4c78-9bab-3ece4c8e28f3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.368231 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-scripts" (OuterVolumeSpecName: "scripts") pod "88ea509c-baf0-4c78-9bab-3ece4c8e28f3" (UID: "88ea509c-baf0-4c78-9bab-3ece4c8e28f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.372080 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-kube-api-access-pk92g" (OuterVolumeSpecName: "kube-api-access-pk92g") pod "88ea509c-baf0-4c78-9bab-3ece4c8e28f3" (UID: "88ea509c-baf0-4c78-9bab-3ece4c8e28f3"). InnerVolumeSpecName "kube-api-access-pk92g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.420462 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "88ea509c-baf0-4c78-9bab-3ece4c8e28f3" (UID: "88ea509c-baf0-4c78-9bab-3ece4c8e28f3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.431836 4904 generic.go:334] "Generic (PLEG): container finished" podID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerID="56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b" exitCode=0 Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.431881 4904 generic.go:334] "Generic (PLEG): container finished" podID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerID="36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5" exitCode=2 Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.431917 4904 generic.go:334] "Generic (PLEG): container finished" podID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerID="2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c" exitCode=0 Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.431928 4904 generic.go:334] "Generic (PLEG): container finished" podID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerID="9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414" exitCode=0 Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.431953 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerDied","Data":"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b"} Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.431987 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerDied","Data":"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5"} Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.432000 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerDied","Data":"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c"} Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.432011 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerDied","Data":"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414"} Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.432020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88ea509c-baf0-4c78-9bab-3ece4c8e28f3","Type":"ContainerDied","Data":"122b005c4ba91d26d527703fc91121d4d7ad5d1138f60b8f8e4d8cccd7dd99fc"} Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.432038 4904 scope.go:117] "RemoveContainer" containerID="56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.432237 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.438436 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88ea509c-baf0-4c78-9bab-3ece4c8e28f3" (UID: "88ea509c-baf0-4c78-9bab-3ece4c8e28f3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.458130 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88ea509c-baf0-4c78-9bab-3ece4c8e28f3" (UID: "88ea509c-baf0-4c78-9bab-3ece4c8e28f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.462225 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.462279 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.462296 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.462314 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.462327 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.462339 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk92g\" (UniqueName: \"kubernetes.io/projected/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-kube-api-access-pk92g\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.487175 4904 scope.go:117] "RemoveContainer" containerID="36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.528540 4904 scope.go:117] "RemoveContainer" containerID="2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.541998 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-config-data" (OuterVolumeSpecName: "config-data") pod "88ea509c-baf0-4c78-9bab-3ece4c8e28f3" (UID: "88ea509c-baf0-4c78-9bab-3ece4c8e28f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.564924 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88ea509c-baf0-4c78-9bab-3ece4c8e28f3-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.668644 4904 scope.go:117] "RemoveContainer" containerID="9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.696197 4904 scope.go:117] "RemoveContainer" containerID="56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b" Nov 21 13:59:54 crc kubenswrapper[4904]: E1121 13:59:54.696892 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": container with ID starting with 56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b not found: ID does not exist" containerID="56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.696940 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b"} err="failed to get container status \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": rpc error: code = NotFound desc = could not find container \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": container with ID starting with 56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.696966 4904 scope.go:117] "RemoveContainer" containerID="36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5" Nov 21 13:59:54 crc kubenswrapper[4904]: E1121 13:59:54.697876 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": container with ID starting with 36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5 not found: ID does not exist" containerID="36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.697909 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5"} err="failed to get container status \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": rpc error: code = NotFound desc = could not find container \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": container with ID starting with 36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5 not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.697926 4904 scope.go:117] "RemoveContainer" containerID="2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c" Nov 21 13:59:54 crc kubenswrapper[4904]: E1121 13:59:54.698287 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": container with ID starting with 2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c not found: ID does not exist" 
containerID="2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.698340 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c"} err="failed to get container status \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": rpc error: code = NotFound desc = could not find container \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": container with ID starting with 2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.698372 4904 scope.go:117] "RemoveContainer" containerID="9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414" Nov 21 13:59:54 crc kubenswrapper[4904]: E1121 13:59:54.698686 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": container with ID starting with 9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414 not found: ID does not exist" containerID="9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.698722 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414"} err="failed to get container status \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": rpc error: code = NotFound desc = could not find container \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": container with ID starting with 9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414 not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.698744 4904 scope.go:117] "RemoveContainer" containerID="56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.699296 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b"} err="failed to get container status \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": rpc error: code = NotFound desc = could not find container \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": container with ID starting with 56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.699325 4904 scope.go:117] "RemoveContainer" containerID="36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.699835 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5"} err="failed to get container status \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": rpc error: code = NotFound desc = could not find container \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": container with ID starting with 36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5 not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.699881 4904 scope.go:117] "RemoveContainer" 
containerID="2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.700233 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c"} err="failed to get container status \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": rpc error: code = NotFound desc = could not find container \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": container with ID starting with 2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.700251 4904 scope.go:117] "RemoveContainer" containerID="9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.700617 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414"} err="failed to get container status \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": rpc error: code = NotFound desc = could not find container \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": container with ID starting with 9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414 not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.701011 4904 scope.go:117] "RemoveContainer" containerID="56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.701399 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b"} err="failed to get container status \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": rpc error: code = NotFound desc = could not find container \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": container with ID starting with 56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.701422 4904 scope.go:117] "RemoveContainer" containerID="36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.701647 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5"} err="failed to get container status \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": rpc error: code = NotFound desc = could not find container \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": container with ID starting with 36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5 not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.701699 4904 scope.go:117] "RemoveContainer" containerID="2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.702041 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c"} err="failed to get container status \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": rpc error: code = NotFound desc = could not find 
container \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": container with ID starting with 2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.702070 4904 scope.go:117] "RemoveContainer" containerID="9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.702753 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414"} err="failed to get container status \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": rpc error: code = NotFound desc = could not find container \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": container with ID starting with 9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414 not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.702776 4904 scope.go:117] "RemoveContainer" containerID="56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.703340 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b"} err="failed to get container status \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": rpc error: code = NotFound desc = could not find container \"56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b\": container with ID starting with 56e3d09687c4bfacf961633c4a2756a46f44c7585a9024389374ce72cf67691b not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.703359 4904 scope.go:117] "RemoveContainer" containerID="36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.703749 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5"} err="failed to get container status \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": rpc error: code = NotFound desc = could not find container \"36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5\": container with ID starting with 36bb91d0455389153ae0420b9c626932cb0ba56e20309139dc13194882de31d5 not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.703771 4904 scope.go:117] "RemoveContainer" containerID="2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.704091 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c"} err="failed to get container status \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": rpc error: code = NotFound desc = could not find container \"2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c\": container with ID starting with 2b8be982b31e191f021e4fce63f5f2689d6bd249fd1e52ab0307c31bbe44540c not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.704111 4904 scope.go:117] "RemoveContainer" containerID="9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.704360 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414"} err="failed to get container status \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": rpc error: code = NotFound desc = could not find container \"9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414\": container with ID starting with 9086939d11e0ca223448ead965ac557bc9d9c87310b75614a65892ccdc6aa414 not found: ID does not exist" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.782548 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.797680 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.820047 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:54 crc kubenswrapper[4904]: E1121 13:59:54.820686 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="ceilometer-notification-agent" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.820704 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="ceilometer-notification-agent" Nov 21 13:59:54 crc kubenswrapper[4904]: E1121 13:59:54.820737 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="ceilometer-central-agent" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.820746 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="ceilometer-central-agent" Nov 21 13:59:54 crc kubenswrapper[4904]: E1121 13:59:54.820778 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="proxy-httpd" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.820784 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="proxy-httpd" Nov 21 13:59:54 crc kubenswrapper[4904]: E1121 13:59:54.820799 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="sg-core" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.820805 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="sg-core" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.821040 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="ceilometer-notification-agent" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.821063 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="proxy-httpd" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.821074 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="sg-core" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.821094 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" containerName="ceilometer-central-agent" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.830150 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.839873 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.840245 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.840437 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.840590 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.874144 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.874211 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.874245 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.874306 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-log-httpd\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.874328 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-config-data\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.874345 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-scripts\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.874369 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcm9\" (UniqueName: \"kubernetes.io/projected/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-kube-api-access-hxcm9\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.874405 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-run-httpd\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.976756 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.976802 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.976827 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.976878 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-log-httpd\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.976898 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-config-data\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.976924 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-scripts\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.976950 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcm9\" (UniqueName: \"kubernetes.io/projected/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-kube-api-access-hxcm9\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.977107 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-run-httpd\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.977637 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-log-httpd\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.977818 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-run-httpd\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.981958 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.983382 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.985171 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-config-data\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.991490 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-scripts\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:54 crc kubenswrapper[4904]: I1121 13:59:54.999053 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:55 crc kubenswrapper[4904]: I1121 13:59:55.002788 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcm9\" (UniqueName: \"kubernetes.io/projected/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-kube-api-access-hxcm9\") pod \"ceilometer-0\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " pod="openstack/ceilometer-0" Nov 21 13:59:55 crc kubenswrapper[4904]: I1121 13:59:55.163108 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 13:59:55 crc kubenswrapper[4904]: I1121 13:59:55.328098 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerName="rabbitmq" containerID="cri-o://58ad7f87df0c5b3de290c7d50f4cc86599032e2538a3c0da8ac6d5edb4326ef4" gracePeriod=604795 Nov 21 13:59:55 crc kubenswrapper[4904]: I1121 13:59:55.743073 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 13:59:56 crc kubenswrapper[4904]: I1121 13:59:56.360998 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerName="rabbitmq" containerID="cri-o://19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8" gracePeriod=604795 Nov 21 13:59:56 crc kubenswrapper[4904]: I1121 13:59:56.475356 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerStarted","Data":"551c3892e8c876c67e9b234f9aadd79d90fa24fc0d1dc6fa1820f3cc5b9b7d43"} Nov 21 13:59:56 crc kubenswrapper[4904]: I1121 13:59:56.523481 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 13:59:56 crc kubenswrapper[4904]: E1121 13:59:56.523993 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 13:59:56 crc kubenswrapper[4904]: I1121 13:59:56.535606 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ea509c-baf0-4c78-9bab-3ece4c8e28f3" path="/var/lib/kubelet/pods/88ea509c-baf0-4c78-9bab-3ece4c8e28f3/volumes" Nov 21 13:59:58 crc kubenswrapper[4904]: I1121 13:59:58.661504 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.122:5671: connect: connection refused" Nov 21 13:59:58 crc kubenswrapper[4904]: I1121 13:59:58.739405 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.123:5671: connect: connection refused" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.164428 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz"] Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.166441 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.179009 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz"] Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.204078 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.205213 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.254274 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19b62d5-1edd-42a2-b023-9f7f4e71c368-secret-volume\") pod \"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.254365 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19b62d5-1edd-42a2-b023-9f7f4e71c368-config-volume\") pod \"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.254397 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmds9\" (UniqueName: \"kubernetes.io/projected/f19b62d5-1edd-42a2-b023-9f7f4e71c368-kube-api-access-zmds9\") pod \"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.356585 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19b62d5-1edd-42a2-b023-9f7f4e71c368-secret-volume\") pod \"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.356753 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19b62d5-1edd-42a2-b023-9f7f4e71c368-config-volume\") pod \"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.356815 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmds9\" (UniqueName: \"kubernetes.io/projected/f19b62d5-1edd-42a2-b023-9f7f4e71c368-kube-api-access-zmds9\") pod \"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.358252 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19b62d5-1edd-42a2-b023-9f7f4e71c368-config-volume\") pod 
\"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.376955 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmds9\" (UniqueName: \"kubernetes.io/projected/f19b62d5-1edd-42a2-b023-9f7f4e71c368-kube-api-access-zmds9\") pod \"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.382969 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19b62d5-1edd-42a2-b023-9f7f4e71c368-secret-volume\") pod \"collect-profiles-29395560-gpdbz\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:00 crc kubenswrapper[4904]: I1121 14:00:00.530300 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:02 crc kubenswrapper[4904]: I1121 14:00:02.562497 4904 generic.go:334] "Generic (PLEG): container finished" podID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerID="58ad7f87df0c5b3de290c7d50f4cc86599032e2538a3c0da8ac6d5edb4326ef4" exitCode=0 Nov 21 14:00:02 crc kubenswrapper[4904]: I1121 14:00:02.562587 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f72ba976-6eb5-4886-81fc-3f7e4563039d","Type":"ContainerDied","Data":"58ad7f87df0c5b3de290c7d50f4cc86599032e2538a3c0da8ac6d5edb4326ef4"} Nov 21 14:00:03 crc kubenswrapper[4904]: I1121 14:00:03.581206 4904 generic.go:334] "Generic (PLEG): container finished" podID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerID="19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8" exitCode=0 Nov 21 14:00:03 crc kubenswrapper[4904]: I1121 14:00:03.581286 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"914527f1-8202-44fc-bbbb-4c39cf793a7b","Type":"ContainerDied","Data":"19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8"} Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.496549 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-g52dw"] Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.502842 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.511086 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.543345 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-g52dw"] Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.648903 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-config\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.649129 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5qxr\" (UniqueName: \"kubernetes.io/projected/d6283671-4b23-4a54-a7bb-6e40619d5ea9-kube-api-access-q5qxr\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.649155 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.649172 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.649255 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.649278 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.649310 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.751480 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-config\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: 
\"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.751583 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5qxr\" (UniqueName: \"kubernetes.io/projected/d6283671-4b23-4a54-a7bb-6e40619d5ea9-kube-api-access-q5qxr\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.751622 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.751674 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.751787 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.751821 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.751874 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.752452 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.753025 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.755752 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: 
\"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.757824 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-config\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.761355 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.761417 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.803482 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5qxr\" (UniqueName: \"kubernetes.io/projected/d6283671-4b23-4a54-a7bb-6e40619d5ea9-kube-api-access-q5qxr\") pod \"dnsmasq-dns-7d84b4d45c-g52dw\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:06 crc kubenswrapper[4904]: I1121 14:00:06.870280 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.192551 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.202027 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.391883 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-config-data\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392446 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqgmp\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-kube-api-access-pqgmp\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392518 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-server-conf\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392551 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-tls\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392589 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-erlang-cookie\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392612 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914527f1-8202-44fc-bbbb-4c39cf793a7b-erlang-cookie-secret\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392647 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-server-conf\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392692 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f72ba976-6eb5-4886-81fc-3f7e4563039d-erlang-cookie-secret\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392718 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-config-data\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392750 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-plugins\") pod 
\"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392832 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-plugins-conf\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392958 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.392988 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-confd\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393023 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-confd\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393044 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-tls\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393066 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914527f1-8202-44fc-bbbb-4c39cf793a7b-pod-info\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393109 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f72ba976-6eb5-4886-81fc-3f7e4563039d-pod-info\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393190 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-erlang-cookie\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393240 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-plugins-conf\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393280 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " 
Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393315 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-plugins\") pod \"914527f1-8202-44fc-bbbb-4c39cf793a7b\" (UID: \"914527f1-8202-44fc-bbbb-4c39cf793a7b\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.393395 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqpxt\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-kube-api-access-jqpxt\") pod \"f72ba976-6eb5-4886-81fc-3f7e4563039d\" (UID: \"f72ba976-6eb5-4886-81fc-3f7e4563039d\") " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.398623 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.401241 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.402748 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.404321 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.407635 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f72ba976-6eb5-4886-81fc-3f7e4563039d-pod-info" (OuterVolumeSpecName: "pod-info") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.408333 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914527f1-8202-44fc-bbbb-4c39cf793a7b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.409502 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.409636 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.409912 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.411828 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.419130 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/914527f1-8202-44fc-bbbb-4c39cf793a7b-pod-info" (OuterVolumeSpecName: "pod-info") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.419908 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-kube-api-access-pqgmp" (OuterVolumeSpecName: "kube-api-access-pqgmp") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "kube-api-access-pqgmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.424481 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.425204 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f72ba976-6eb5-4886-81fc-3f7e4563039d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.426344 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-kube-api-access-jqpxt" (OuterVolumeSpecName: "kube-api-access-jqpxt") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "kube-api-access-jqpxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.436809 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.498259 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-config-data" (OuterVolumeSpecName: "config-data") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500021 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqpxt\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-kube-api-access-jqpxt\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500084 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500098 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqgmp\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-kube-api-access-pqgmp\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500111 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500125 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500139 4904 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/914527f1-8202-44fc-bbbb-4c39cf793a7b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500151 4904 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f72ba976-6eb5-4886-81fc-3f7e4563039d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500163 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-plugins\") on 
node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500174 4904 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.500625 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.501859 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.501916 4904 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/914527f1-8202-44fc-bbbb-4c39cf793a7b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.501930 4904 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f72ba976-6eb5-4886-81fc-3f7e4563039d-pod-info\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.501961 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.501975 4904 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.502013 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.502061 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.502572 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-config-data" (OuterVolumeSpecName: "config-data") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.504516 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-server-conf" (OuterVolumeSpecName: "server-conf") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.531054 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.553563 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.562481 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-server-conf" (OuterVolumeSpecName: "server-conf") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.606857 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.607166 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.607252 4904 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.607327 4904 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/914527f1-8202-44fc-bbbb-4c39cf793a7b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.607398 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f72ba976-6eb5-4886-81fc-3f7e4563039d-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.630384 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "914527f1-8202-44fc-bbbb-4c39cf793a7b" (UID: "914527f1-8202-44fc-bbbb-4c39cf793a7b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.639886 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f72ba976-6eb5-4886-81fc-3f7e4563039d" (UID: "f72ba976-6eb5-4886-81fc-3f7e4563039d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.674270 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f72ba976-6eb5-4886-81fc-3f7e4563039d","Type":"ContainerDied","Data":"7cbc9e21e77fdc0e9f00cc22cc9dc0f06db02b3787df31a9747e98b7a9834097"} Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.674571 4904 scope.go:117] "RemoveContainer" containerID="58ad7f87df0c5b3de290c7d50f4cc86599032e2538a3c0da8ac6d5edb4326ef4" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.674330 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.681262 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"914527f1-8202-44fc-bbbb-4c39cf793a7b","Type":"ContainerDied","Data":"12d9fbb9e3377b9700d1dd4ea9d4ef575822eb9f775698eb9203b4173232edb6"} Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.681389 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.722052 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f72ba976-6eb5-4886-81fc-3f7e4563039d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.722081 4904 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/914527f1-8202-44fc-bbbb-4c39cf793a7b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.758289 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.778797 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.792857 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.806852 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.836016 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:00:10 crc kubenswrapper[4904]: E1121 14:00:10.836676 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerName="rabbitmq" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.836699 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerName="rabbitmq" Nov 21 14:00:10 crc kubenswrapper[4904]: E1121 14:00:10.836715 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerName="setup-container" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.836722 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerName="setup-container" Nov 21 14:00:10 crc kubenswrapper[4904]: E1121 14:00:10.836740 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerName="setup-container" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 
14:00:10.836748 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerName="setup-container" Nov 21 14:00:10 crc kubenswrapper[4904]: E1121 14:00:10.836789 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerName="rabbitmq" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.836797 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerName="rabbitmq" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.837049 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerName="rabbitmq" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.837073 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerName="rabbitmq" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.838774 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.843080 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.850909 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.851139 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.851264 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.851477 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.851566 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m98jh" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.851708 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.874423 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.893153 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.897220 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.901098 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.901270 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.901346 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.901931 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.902134 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pfqw8" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.902335 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.904248 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 21 14:00:10 crc kubenswrapper[4904]: I1121 14:00:10.909369 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.036622 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prkr7\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-kube-api-access-prkr7\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.036721 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.036767 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.036799 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-config-data\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.036881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.036926 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/fdcbae10-10ee-4213-8758-ce56fbe6a27e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.036948 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.036978 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037006 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fdcbae10-10ee-4213-8758-ce56fbe6a27e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037231 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037280 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037422 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037477 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037501 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ld4x\" (UniqueName: \"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-kube-api-access-4ld4x\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037691 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037713 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037739 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037776 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037802 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037855 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037875 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.037994 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.140988 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fdcbae10-10ee-4213-8758-ce56fbe6a27e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141075 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141118 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141162 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fdcbae10-10ee-4213-8758-ce56fbe6a27e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141206 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141249 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141325 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141344 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141390 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ld4x\" (UniqueName: \"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-kube-api-access-4ld4x\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141427 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141445 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141487 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141512 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141555 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141579 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141598 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141649 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141741 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prkr7\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-kube-api-access-prkr7\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141790 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141821 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141874 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-config-data\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.141904 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.144325 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.145247 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.148574 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fdcbae10-10ee-4213-8758-ce56fbe6a27e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.148590 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.148864 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.148864 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.149682 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.154533 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc 
kubenswrapper[4904]: I1121 14:00:11.156073 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.157188 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.157407 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.157484 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.159416 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.160443 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.160747 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdcbae10-10ee-4213-8758-ce56fbe6a27e-config-data\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.166036 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.167197 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.167395 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fdcbae10-10ee-4213-8758-ce56fbe6a27e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.167533 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fdcbae10-10ee-4213-8758-ce56fbe6a27e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.169407 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.170818 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ld4x\" (UniqueName: \"kubernetes.io/projected/8b1f6e46-f0d4-421a-bb86-48f1d622cd97-kube-api-access-4ld4x\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.172443 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prkr7\" (UniqueName: \"kubernetes.io/projected/fdcbae10-10ee-4213-8758-ce56fbe6a27e-kube-api-access-prkr7\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.207949 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b1f6e46-f0d4-421a-bb86-48f1d622cd97\") " pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.244557 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"fdcbae10-10ee-4213-8758-ce56fbe6a27e\") " pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.295191 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.478859 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:11 crc kubenswrapper[4904]: I1121 14:00:11.515427 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:00:11 crc kubenswrapper[4904]: E1121 14:00:11.516927 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:00:12 crc kubenswrapper[4904]: I1121 14:00:12.530128 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" path="/var/lib/kubelet/pods/914527f1-8202-44fc-bbbb-4c39cf793a7b/volumes" Nov 21 14:00:12 crc kubenswrapper[4904]: I1121 14:00:12.637172 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" path="/var/lib/kubelet/pods/f72ba976-6eb5-4886-81fc-3f7e4563039d/volumes" Nov 21 14:00:13 crc kubenswrapper[4904]: I1121 14:00:13.661585 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f72ba976-6eb5-4886-81fc-3f7e4563039d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.122:5671: i/o timeout" Nov 21 14:00:13 crc kubenswrapper[4904]: I1121 14:00:13.738979 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="914527f1-8202-44fc-bbbb-4c39cf793a7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.123:5671: i/o timeout" Nov 21 14:00:18 crc kubenswrapper[4904]: I1121 14:00:18.647451 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz"] Nov 21 14:00:19 crc kubenswrapper[4904]: I1121 14:00:19.484195 4904 scope.go:117] "RemoveContainer" containerID="19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8" Nov 21 14:00:20 crc kubenswrapper[4904]: E1121 14:00:20.445308 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 21 14:00:20 crc kubenswrapper[4904]: E1121 14:00:20.445824 4904 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 21 14:00:20 crc kubenswrapper[4904]: E1121 14:00:20.445973 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztbcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-td4l8_openstack(a6cc886d-3403-4dab-82a8-35aacd9e2bc1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 14:00:20 crc kubenswrapper[4904]: E1121 14:00:20.447541 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-td4l8" podUID="a6cc886d-3403-4dab-82a8-35aacd9e2bc1" Nov 21 14:00:20 crc kubenswrapper[4904]: E1121 14:00:20.830765 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-td4l8" podUID="a6cc886d-3403-4dab-82a8-35aacd9e2bc1" Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.263931 4904 scope.go:117] "RemoveContainer" containerID="3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4" Nov 21 14:00:21 crc kubenswrapper[4904]: W1121 14:00:21.339478 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf19b62d5_1edd_42a2_b023_9f7f4e71c368.slice/crio-6ae351e5a19280866ac261affdc428ce20628aacd52ceae8ef989d1eacb426ed WatchSource:0}: Error finding container 6ae351e5a19280866ac261affdc428ce20628aacd52ceae8ef989d1eacb426ed: Status 404 returned error can't find the container with id 6ae351e5a19280866ac261affdc428ce20628aacd52ceae8ef989d1eacb426ed Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.390506 4904 scope.go:117] "RemoveContainer" 
containerID="3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4" Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.424880 4904 scope.go:117] "RemoveContainer" containerID="9b5e43f6d2586379e718657832b561d8b0d6befa987673226ae450ccc8226177" Nov 21 14:00:21 crc kubenswrapper[4904]: E1121 14:00:21.428845 4904 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-server-0_openstack_f72ba976-6eb5-4886-81fc-3f7e4563039d_0 in pod sandbox 7cbc9e21e77fdc0e9f00cc22cc9dc0f06db02b3787df31a9747e98b7a9834097 from index: no such id: '3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4'" containerID="3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4" Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.428932 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4"} err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-server-0_openstack_f72ba976-6eb5-4886-81fc-3f7e4563039d_0 in pod sandbox 7cbc9e21e77fdc0e9f00cc22cc9dc0f06db02b3787df31a9747e98b7a9834097 from index: no such id: '3d24a09989c1629c1b60f48e3b57ce612782a7bdda8c5a8f1a687e2f40e29fc4'" Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.428988 4904 scope.go:117] "RemoveContainer" containerID="19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8" Nov 21 14:00:21 crc kubenswrapper[4904]: E1121 14:00:21.430605 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8\": container with ID starting with 19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8 not found: ID does not exist" containerID="19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8" Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.430636 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8"} err="failed to get container status \"19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8\": rpc error: code = NotFound desc = could not find container \"19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8\": container with ID starting with 19eed143bc4267fe7448b8288666967fa6402a55eb89cf7eeb2c1a8e808e1fb8 not found: ID does not exist" Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.430677 4904 scope.go:117] "RemoveContainer" containerID="9b5e43f6d2586379e718657832b561d8b0d6befa987673226ae450ccc8226177" Nov 21 14:00:21 crc kubenswrapper[4904]: E1121 14:00:21.496704 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 21 14:00:21 crc kubenswrapper[4904]: E1121 14:00:21.496769 4904 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 21 14:00:21 crc kubenswrapper[4904]: E1121 14:00:21.497863 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h59bh569h5dfh5ffh655h549h598h57ch559hbch577h588hc9h79h5dfh678h597h54bh558h58bh5c9h7bh5b6h655h55chd7h577h6dhc9h5c6h5d7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hxcm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(5f974c52-ff88-48b3-b3c4-fb2bca5201fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 14:00:21 crc kubenswrapper[4904]: E1121 14:00:21.618828 4904 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-cell1-server-0_openstack_914527f1-8202-44fc-bbbb-4c39cf793a7b_0 in pod sandbox 12d9fbb9e3377b9700d1dd4ea9d4ef575822eb9f775698eb9203b4173232edb6: identifier is not a container" containerID="9b5e43f6d2586379e718657832b561d8b0d6befa987673226ae450ccc8226177" Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.619322 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5e43f6d2586379e718657832b561d8b0d6befa987673226ae450ccc8226177"} err="rpc error: code = Unknown desc = failed to delete container k8s_setup-container_rabbitmq-cell1-server-0_openstack_914527f1-8202-44fc-bbbb-4c39cf793a7b_0 in pod sandbox 12d9fbb9e3377b9700d1dd4ea9d4ef575822eb9f775698eb9203b4173232edb6: identifier is not a container" Nov 21 14:00:21 crc 
kubenswrapper[4904]: I1121 14:00:21.861373 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" event={"ID":"f19b62d5-1edd-42a2-b023-9f7f4e71c368","Type":"ContainerStarted","Data":"6ae351e5a19280866ac261affdc428ce20628aacd52ceae8ef989d1eacb426ed"} Nov 21 14:00:21 crc kubenswrapper[4904]: I1121 14:00:21.932413 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-g52dw"] Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.077638 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 21 14:00:22 crc kubenswrapper[4904]: W1121 14:00:22.129129 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6283671_4b23_4a54_a7bb_6e40619d5ea9.slice/crio-eac4d15c1d0ab5641c81e32b4b649d67068bc6a344a4c53e9451c2c28fc6ac0f WatchSource:0}: Error finding container eac4d15c1d0ab5641c81e32b4b649d67068bc6a344a4c53e9451c2c28fc6ac0f: Status 404 returned error can't find the container with id eac4d15c1d0ab5641c81e32b4b649d67068bc6a344a4c53e9451c2c28fc6ac0f Nov 21 14:00:22 crc kubenswrapper[4904]: W1121 14:00:22.140277 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdcbae10_10ee_4213_8758_ce56fbe6a27e.slice/crio-494352c994eff204f3cce56d909554b7783799c634fd4500b23a5a3702e0d502 WatchSource:0}: Error finding container 494352c994eff204f3cce56d909554b7783799c634fd4500b23a5a3702e0d502: Status 404 returned error can't find the container with id 494352c994eff204f3cce56d909554b7783799c634fd4500b23a5a3702e0d502 Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.208988 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.875673 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8b1f6e46-f0d4-421a-bb86-48f1d622cd97","Type":"ContainerStarted","Data":"3ee503aad2198c07e4c40850631748244d28b5235799d3882037c2b7d5173811"} Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.879252 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fdcbae10-10ee-4213-8758-ce56fbe6a27e","Type":"ContainerStarted","Data":"494352c994eff204f3cce56d909554b7783799c634fd4500b23a5a3702e0d502"} Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.881313 4904 generic.go:334] "Generic (PLEG): container finished" podID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" containerID="304a99c8c39d4dcf0aad03daee2fa3f8f5a0e9640fd50d162bcc018ac8bdced2" exitCode=0 Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.881392 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" event={"ID":"d6283671-4b23-4a54-a7bb-6e40619d5ea9","Type":"ContainerDied","Data":"304a99c8c39d4dcf0aad03daee2fa3f8f5a0e9640fd50d162bcc018ac8bdced2"} Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.881414 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" event={"ID":"d6283671-4b23-4a54-a7bb-6e40619d5ea9","Type":"ContainerStarted","Data":"eac4d15c1d0ab5641c81e32b4b649d67068bc6a344a4c53e9451c2c28fc6ac0f"} Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.884734 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerStarted","Data":"44fd4c3d9b1d74a1133573bf946c11085e57c27226829507075ba495bc07c644"} Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.899878 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" event={"ID":"f19b62d5-1edd-42a2-b023-9f7f4e71c368","Type":"ContainerDied","Data":"0030bffab16f6c7e1a5bf21529ffa033da7332d95454adcc04fcae4d791fea82"} Nov 21 14:00:22 crc kubenswrapper[4904]: I1121 14:00:22.899986 4904 generic.go:334] "Generic (PLEG): container finished" podID="f19b62d5-1edd-42a2-b023-9f7f4e71c368" containerID="0030bffab16f6c7e1a5bf21529ffa033da7332d95454adcc04fcae4d791fea82" exitCode=0 Nov 21 14:00:23 crc kubenswrapper[4904]: I1121 14:00:23.514625 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:00:23 crc kubenswrapper[4904]: E1121 14:00:23.515195 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:00:23 crc kubenswrapper[4904]: I1121 14:00:23.951101 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" event={"ID":"d6283671-4b23-4a54-a7bb-6e40619d5ea9","Type":"ContainerStarted","Data":"90cbc0c6cbd9d5b400d47960e29cb3e0db607af1db945b9eb71b9b7a7da29240"} Nov 21 14:00:23 crc kubenswrapper[4904]: I1121 14:00:23.951969 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:23 crc kubenswrapper[4904]: I1121 14:00:23.955191 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerStarted","Data":"541a429f6b17d86b76d18a5133956d6ece21a0fd68eac103961356ba320e88d2"} Nov 21 14:00:23 crc kubenswrapper[4904]: I1121 14:00:23.985516 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" podStartSLOduration=17.985491255 podStartE2EDuration="17.985491255s" podCreationTimestamp="2025-11-21 14:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:23.984110962 +0000 UTC m=+1698.105643524" watchObservedRunningTime="2025-11-21 14:00:23.985491255 +0000 UTC m=+1698.107023817" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.719304 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.832694 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19b62d5-1edd-42a2-b023-9f7f4e71c368-secret-volume\") pod \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.833096 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmds9\" (UniqueName: \"kubernetes.io/projected/f19b62d5-1edd-42a2-b023-9f7f4e71c368-kube-api-access-zmds9\") pod \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.833130 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19b62d5-1edd-42a2-b023-9f7f4e71c368-config-volume\") pod \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\" (UID: \"f19b62d5-1edd-42a2-b023-9f7f4e71c368\") " Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.834412 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19b62d5-1edd-42a2-b023-9f7f4e71c368-config-volume" (OuterVolumeSpecName: "config-volume") pod "f19b62d5-1edd-42a2-b023-9f7f4e71c368" (UID: "f19b62d5-1edd-42a2-b023-9f7f4e71c368"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.840249 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19b62d5-1edd-42a2-b023-9f7f4e71c368-kube-api-access-zmds9" (OuterVolumeSpecName: "kube-api-access-zmds9") pod "f19b62d5-1edd-42a2-b023-9f7f4e71c368" (UID: "f19b62d5-1edd-42a2-b023-9f7f4e71c368"). InnerVolumeSpecName "kube-api-access-zmds9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.840710 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19b62d5-1edd-42a2-b023-9f7f4e71c368-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f19b62d5-1edd-42a2-b023-9f7f4e71c368" (UID: "f19b62d5-1edd-42a2-b023-9f7f4e71c368"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.936635 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19b62d5-1edd-42a2-b023-9f7f4e71c368-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.936700 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmds9\" (UniqueName: \"kubernetes.io/projected/f19b62d5-1edd-42a2-b023-9f7f4e71c368-kube-api-access-zmds9\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.936712 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19b62d5-1edd-42a2-b023-9f7f4e71c368-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.971329 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fdcbae10-10ee-4213-8758-ce56fbe6a27e","Type":"ContainerStarted","Data":"6d09a8db82a01e69368ac18babb5382fc21c1f1af9025bf1a04042033691e901"} Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.973105 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" event={"ID":"f19b62d5-1edd-42a2-b023-9f7f4e71c368","Type":"ContainerDied","Data":"6ae351e5a19280866ac261affdc428ce20628aacd52ceae8ef989d1eacb426ed"} Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.973125 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.973141 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae351e5a19280866ac261affdc428ce20628aacd52ceae8ef989d1eacb426ed" Nov 21 14:00:24 crc kubenswrapper[4904]: I1121 14:00:24.988234 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8b1f6e46-f0d4-421a-bb86-48f1d622cd97","Type":"ContainerStarted","Data":"5e068ded51b1fc80e9eb221543458bbb23916ecee5a97117b15a7ff43582e730"} Nov 21 14:00:25 crc kubenswrapper[4904]: E1121 14:00:25.314680 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" Nov 21 14:00:26 crc kubenswrapper[4904]: I1121 14:00:26.005188 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerStarted","Data":"d9f1e4693f5d7fed3be2bb117adb9b5711be39b1f55297d212b3a2765e461af4"} Nov 21 14:00:26 crc kubenswrapper[4904]: E1121 14:00:26.009112 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" Nov 21 14:00:27 crc kubenswrapper[4904]: I1121 14:00:27.015437 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 14:00:27 crc kubenswrapper[4904]: E1121 14:00:27.019152 4904 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" Nov 21 14:00:28 crc kubenswrapper[4904]: E1121 14:00:28.028323 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" Nov 21 14:00:31 crc kubenswrapper[4904]: I1121 14:00:31.872605 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:31 crc kubenswrapper[4904]: I1121 14:00:31.985971 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-hchfr"] Nov 21 14:00:31 crc kubenswrapper[4904]: I1121 14:00:31.986441 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" podUID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" containerName="dnsmasq-dns" containerID="cri-o://14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b" gracePeriod=10 Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.159189 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-wh2vm"] Nov 21 14:00:32 crc kubenswrapper[4904]: E1121 14:00:32.164032 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19b62d5-1edd-42a2-b023-9f7f4e71c368" containerName="collect-profiles" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.164088 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19b62d5-1edd-42a2-b023-9f7f4e71c368" containerName="collect-profiles" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.164636 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19b62d5-1edd-42a2-b023-9f7f4e71c368" containerName="collect-profiles" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.166382 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.171037 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-wh2vm"] Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.270427 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-config\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.270560 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.270630 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-nb\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.270709 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-sb\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.270734 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2ntm\" (UniqueName: \"kubernetes.io/projected/da697e71-847e-48aa-8210-3974862d9deb-kube-api-access-k2ntm\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.270863 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-svc\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.270935 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-swift-storage-0\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.374262 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.372997 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-openstack-edpm-ipam\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.376155 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-nb\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.376219 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-sb\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.376242 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2ntm\" (UniqueName: \"kubernetes.io/projected/da697e71-847e-48aa-8210-3974862d9deb-kube-api-access-k2ntm\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.376421 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-svc\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.376512 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-swift-storage-0\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.376581 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-config\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.377002 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-nb\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.377272 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-svc\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.377375 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-sb\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.377759 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-swift-storage-0\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.377849 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-config\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.407617 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2ntm\" (UniqueName: \"kubernetes.io/projected/da697e71-847e-48aa-8210-3974862d9deb-kube-api-access-k2ntm\") pod \"dnsmasq-dns-6559847fc9-wh2vm\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.497438 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.711402 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.794483 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-sb\") pod \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.795102 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-config\") pod \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.795174 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-swift-storage-0\") pod \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.795194 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76rfh\" (UniqueName: \"kubernetes.io/projected/b45512b9-5eee-49ce-b2f4-f70fee0312d2-kube-api-access-76rfh\") pod \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.795301 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-svc\") pod \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " Nov 21 14:00:32 crc 
kubenswrapper[4904]: I1121 14:00:32.795358 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-nb\") pod \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\" (UID: \"b45512b9-5eee-49ce-b2f4-f70fee0312d2\") " Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.802882 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45512b9-5eee-49ce-b2f4-f70fee0312d2-kube-api-access-76rfh" (OuterVolumeSpecName: "kube-api-access-76rfh") pod "b45512b9-5eee-49ce-b2f4-f70fee0312d2" (UID: "b45512b9-5eee-49ce-b2f4-f70fee0312d2"). InnerVolumeSpecName "kube-api-access-76rfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.860278 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-config" (OuterVolumeSpecName: "config") pod "b45512b9-5eee-49ce-b2f4-f70fee0312d2" (UID: "b45512b9-5eee-49ce-b2f4-f70fee0312d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.864850 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b45512b9-5eee-49ce-b2f4-f70fee0312d2" (UID: "b45512b9-5eee-49ce-b2f4-f70fee0312d2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.875044 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b45512b9-5eee-49ce-b2f4-f70fee0312d2" (UID: "b45512b9-5eee-49ce-b2f4-f70fee0312d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.881826 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b45512b9-5eee-49ce-b2f4-f70fee0312d2" (UID: "b45512b9-5eee-49ce-b2f4-f70fee0312d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.890062 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b45512b9-5eee-49ce-b2f4-f70fee0312d2" (UID: "b45512b9-5eee-49ce-b2f4-f70fee0312d2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.898114 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.898166 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.898181 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.898192 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-config\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.898205 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b45512b9-5eee-49ce-b2f4-f70fee0312d2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:32 crc kubenswrapper[4904]: I1121 14:00:32.898220 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76rfh\" (UniqueName: \"kubernetes.io/projected/b45512b9-5eee-49ce-b2f4-f70fee0312d2-kube-api-access-76rfh\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:33 crc kubenswrapper[4904]: W1121 14:00:33.080034 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda697e71_847e_48aa_8210_3974862d9deb.slice/crio-e68d040f8255c3cff70e88e4acd5413ee5e12fb9f30aa42954bfbf9b898aa1bf WatchSource:0}: Error finding container e68d040f8255c3cff70e88e4acd5413ee5e12fb9f30aa42954bfbf9b898aa1bf: Status 404 returned error can't find the container with id e68d040f8255c3cff70e88e4acd5413ee5e12fb9f30aa42954bfbf9b898aa1bf Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.082529 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-wh2vm"] Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.090567 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" event={"ID":"da697e71-847e-48aa-8210-3974862d9deb","Type":"ContainerStarted","Data":"e68d040f8255c3cff70e88e4acd5413ee5e12fb9f30aa42954bfbf9b898aa1bf"} Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.095523 4904 generic.go:334] "Generic (PLEG): container finished" podID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" containerID="14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b" exitCode=0 Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.095572 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" event={"ID":"b45512b9-5eee-49ce-b2f4-f70fee0312d2","Type":"ContainerDied","Data":"14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b"} Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.095606 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" 
event={"ID":"b45512b9-5eee-49ce-b2f4-f70fee0312d2","Type":"ContainerDied","Data":"38f61bf7c0cf678f0cf197f3a651ef5b52c90f33c452f2585f14c84bc9c72733"} Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.095606 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-hchfr" Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.095624 4904 scope.go:117] "RemoveContainer" containerID="14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b" Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.271319 4904 scope.go:117] "RemoveContainer" containerID="49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa" Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.313118 4904 scope.go:117] "RemoveContainer" containerID="14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b" Nov 21 14:00:33 crc kubenswrapper[4904]: E1121 14:00:33.313956 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b\": container with ID starting with 14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b not found: ID does not exist" containerID="14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b" Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.314022 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b"} err="failed to get container status \"14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b\": rpc error: code = NotFound desc = could not find container \"14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b\": container with ID starting with 14237a9f9879d95ca729d2f30dffb6dbc3922849932040aa9f129082988e427b not found: ID does not exist" Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.314051 4904 scope.go:117] "RemoveContainer" containerID="49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa" Nov 21 14:00:33 crc kubenswrapper[4904]: E1121 14:00:33.314453 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa\": container with ID starting with 49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa not found: ID does not exist" containerID="49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa" Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.314478 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa"} err="failed to get container status \"49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa\": rpc error: code = NotFound desc = could not find container \"49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa\": container with ID starting with 49fea8b498e876b6f4a226771b3f3268b49dcdd91c164e66e83afd1b455d53fa not found: ID does not exist" Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.314894 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-hchfr"] Nov 21 14:00:33 crc kubenswrapper[4904]: I1121 14:00:33.326495 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-hchfr"] Nov 21 14:00:34 crc kubenswrapper[4904]: I1121 
14:00:34.106888 4904 generic.go:334] "Generic (PLEG): container finished" podID="da697e71-847e-48aa-8210-3974862d9deb" containerID="cb764a2bac67a6184a682008df54717eeb6c2ab12bc669c554889f8bdfe38f92" exitCode=0 Nov 21 14:00:34 crc kubenswrapper[4904]: I1121 14:00:34.107258 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" event={"ID":"da697e71-847e-48aa-8210-3974862d9deb","Type":"ContainerDied","Data":"cb764a2bac67a6184a682008df54717eeb6c2ab12bc669c554889f8bdfe38f92"} Nov 21 14:00:34 crc kubenswrapper[4904]: I1121 14:00:34.529357 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" path="/var/lib/kubelet/pods/b45512b9-5eee-49ce-b2f4-f70fee0312d2/volumes" Nov 21 14:00:35 crc kubenswrapper[4904]: I1121 14:00:35.131095 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" event={"ID":"da697e71-847e-48aa-8210-3974862d9deb","Type":"ContainerStarted","Data":"a6208f9d9b7ca7542f9fb0dedbc74bee9651492fe0f31e1faa9925789fffc999"} Nov 21 14:00:35 crc kubenswrapper[4904]: I1121 14:00:35.131260 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:35 crc kubenswrapper[4904]: I1121 14:00:35.514004 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:00:35 crc kubenswrapper[4904]: E1121 14:00:35.514441 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:00:35 crc kubenswrapper[4904]: I1121 14:00:35.539626 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" podStartSLOduration=3.539592769 podStartE2EDuration="3.539592769s" podCreationTimestamp="2025-11-21 14:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:35.15882783 +0000 UTC m=+1709.280360382" watchObservedRunningTime="2025-11-21 14:00:35.539592769 +0000 UTC m=+1709.661125361" Nov 21 14:00:36 crc kubenswrapper[4904]: I1121 14:00:36.147234 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-td4l8" event={"ID":"a6cc886d-3403-4dab-82a8-35aacd9e2bc1","Type":"ContainerStarted","Data":"762d7daf2b31677c4454baad94d67ce502d86b6b6ccdaad7bfb655acb9820efa"} Nov 21 14:00:36 crc kubenswrapper[4904]: I1121 14:00:36.176564 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-td4l8" podStartSLOduration=1.836966603 podStartE2EDuration="48.176536304s" podCreationTimestamp="2025-11-21 13:59:48 +0000 UTC" firstStartedPulling="2025-11-21 13:59:49.438310326 +0000 UTC m=+1663.559842878" lastFinishedPulling="2025-11-21 14:00:35.777879977 +0000 UTC m=+1709.899412579" observedRunningTime="2025-11-21 14:00:36.165802881 +0000 UTC m=+1710.287335433" watchObservedRunningTime="2025-11-21 14:00:36.176536304 +0000 UTC m=+1710.298068866" Nov 21 14:00:39 crc kubenswrapper[4904]: I1121 14:00:39.190024 4904 generic.go:334] "Generic (PLEG): container finished" 
podID="a6cc886d-3403-4dab-82a8-35aacd9e2bc1" containerID="762d7daf2b31677c4454baad94d67ce502d86b6b6ccdaad7bfb655acb9820efa" exitCode=0 Nov 21 14:00:39 crc kubenswrapper[4904]: I1121 14:00:39.190132 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-td4l8" event={"ID":"a6cc886d-3403-4dab-82a8-35aacd9e2bc1","Type":"ContainerDied","Data":"762d7daf2b31677c4454baad94d67ce502d86b6b6ccdaad7bfb655acb9820efa"} Nov 21 14:00:40 crc kubenswrapper[4904]: I1121 14:00:40.768410 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-td4l8" Nov 21 14:00:40 crc kubenswrapper[4904]: I1121 14:00:40.912957 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-config-data\") pod \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " Nov 21 14:00:40 crc kubenswrapper[4904]: I1121 14:00:40.913138 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-combined-ca-bundle\") pod \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " Nov 21 14:00:40 crc kubenswrapper[4904]: I1121 14:00:40.913377 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbcb\" (UniqueName: \"kubernetes.io/projected/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-kube-api-access-ztbcb\") pod \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\" (UID: \"a6cc886d-3403-4dab-82a8-35aacd9e2bc1\") " Nov 21 14:00:40 crc kubenswrapper[4904]: I1121 14:00:40.920976 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-kube-api-access-ztbcb" (OuterVolumeSpecName: "kube-api-access-ztbcb") pod "a6cc886d-3403-4dab-82a8-35aacd9e2bc1" (UID: "a6cc886d-3403-4dab-82a8-35aacd9e2bc1"). InnerVolumeSpecName "kube-api-access-ztbcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:40 crc kubenswrapper[4904]: I1121 14:00:40.951463 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6cc886d-3403-4dab-82a8-35aacd9e2bc1" (UID: "a6cc886d-3403-4dab-82a8-35aacd9e2bc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:41 crc kubenswrapper[4904]: I1121 14:00:41.006491 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-config-data" (OuterVolumeSpecName: "config-data") pod "a6cc886d-3403-4dab-82a8-35aacd9e2bc1" (UID: "a6cc886d-3403-4dab-82a8-35aacd9e2bc1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:41 crc kubenswrapper[4904]: I1121 14:00:41.016769 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:41 crc kubenswrapper[4904]: I1121 14:00:41.016795 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbcb\" (UniqueName: \"kubernetes.io/projected/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-kube-api-access-ztbcb\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:41 crc kubenswrapper[4904]: I1121 14:00:41.016807 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6cc886d-3403-4dab-82a8-35aacd9e2bc1-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:41 crc kubenswrapper[4904]: I1121 14:00:41.221501 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-td4l8" event={"ID":"a6cc886d-3403-4dab-82a8-35aacd9e2bc1","Type":"ContainerDied","Data":"9934ec6645c770f719feef530e76c6deef14626f60da09c8856885cc1e264c54"} Nov 21 14:00:41 crc kubenswrapper[4904]: I1121 14:00:41.222066 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9934ec6645c770f719feef530e76c6deef14626f60da09c8856885cc1e264c54" Nov 21 14:00:41 crc kubenswrapper[4904]: I1121 14:00:41.221641 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-td4l8" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.288825 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5bcc58b9d9-fdhrg"] Nov 21 14:00:42 crc kubenswrapper[4904]: E1121 14:00:42.289400 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" containerName="dnsmasq-dns" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.289421 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" containerName="dnsmasq-dns" Nov 21 14:00:42 crc kubenswrapper[4904]: E1121 14:00:42.289462 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" containerName="init" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.289471 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" containerName="init" Nov 21 14:00:42 crc kubenswrapper[4904]: E1121 14:00:42.289500 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6cc886d-3403-4dab-82a8-35aacd9e2bc1" containerName="heat-db-sync" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.289508 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6cc886d-3403-4dab-82a8-35aacd9e2bc1" containerName="heat-db-sync" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.289801 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6cc886d-3403-4dab-82a8-35aacd9e2bc1" containerName="heat-db-sync" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.289837 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45512b9-5eee-49ce-b2f4-f70fee0312d2" containerName="dnsmasq-dns" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.290787 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.309196 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5bcc58b9d9-fdhrg"] Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.351513 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5bd7cc4489-fmh8v"] Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.353616 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.354299 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjsx\" (UniqueName: \"kubernetes.io/projected/c9c258f6-6c0c-4072-bd71-610209d2bbb9-kube-api-access-fxjsx\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.354502 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-config-data\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.354577 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-config-data-custom\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.355001 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-combined-ca-bundle\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.363312 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5bd7cc4489-fmh8v"] Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.452171 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7669f9847d-7bgvn"] Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.454547 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.459597 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-config-data\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.459749 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjsx\" (UniqueName: \"kubernetes.io/projected/c9c258f6-6c0c-4072-bd71-610209d2bbb9-kube-api-access-fxjsx\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.459802 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctnd5\" (UniqueName: \"kubernetes.io/projected/da6b9694-5408-45ce-8e1c-3cf05c404837-kube-api-access-ctnd5\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.459846 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-combined-ca-bundle\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.459872 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-config-data\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.459899 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-config-data-custom\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.459983 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-public-tls-certs\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.460019 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-internal-tls-certs\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.460043 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-config-data-custom\") 
pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.460083 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-combined-ca-bundle\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.477554 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-config-data-custom\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.484112 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7669f9847d-7bgvn"] Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.484145 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-config-data\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.490257 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjsx\" (UniqueName: \"kubernetes.io/projected/c9c258f6-6c0c-4072-bd71-610209d2bbb9-kube-api-access-fxjsx\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.500938 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.502853 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c258f6-6c0c-4072-bd71-610209d2bbb9-combined-ca-bundle\") pod \"heat-engine-5bcc58b9d9-fdhrg\" (UID: \"c9c258f6-6c0c-4072-bd71-610209d2bbb9\") " pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.564850 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-internal-tls-certs\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.564937 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-config-data-custom\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.564985 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-combined-ca-bundle\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: 
\"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565106 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctnd5\" (UniqueName: \"kubernetes.io/projected/da6b9694-5408-45ce-8e1c-3cf05c404837-kube-api-access-ctnd5\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565184 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-combined-ca-bundle\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565256 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbk2j\" (UniqueName: \"kubernetes.io/projected/55eba19c-92ed-4a40-81cb-9085d6becd76-kube-api-access-jbk2j\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565345 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-public-tls-certs\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-public-tls-certs\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565558 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-internal-tls-certs\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565604 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-config-data-custom\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565640 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-config-data\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.565749 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-config-data\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: 
\"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.583510 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-public-tls-certs\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.610489 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.619885 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-g52dw"] Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.620379 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" podUID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" containerName="dnsmasq-dns" containerID="cri-o://90cbc0c6cbd9d5b400d47960e29cb3e0db607af1db945b9eb71b9b7a7da29240" gracePeriod=10 Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.621481 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-config-data-custom\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.621765 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-combined-ca-bundle\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.623223 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-config-data\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.637635 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctnd5\" (UniqueName: \"kubernetes.io/projected/da6b9694-5408-45ce-8e1c-3cf05c404837-kube-api-access-ctnd5\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.643932 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da6b9694-5408-45ce-8e1c-3cf05c404837-internal-tls-certs\") pod \"heat-api-5bd7cc4489-fmh8v\" (UID: \"da6b9694-5408-45ce-8e1c-3cf05c404837\") " pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.668964 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-internal-tls-certs\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.669032 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-config-data-custom\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.669076 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-combined-ca-bundle\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.669177 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbk2j\" (UniqueName: \"kubernetes.io/projected/55eba19c-92ed-4a40-81cb-9085d6becd76-kube-api-access-jbk2j\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.669256 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-public-tls-certs\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.669281 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-config-data\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.680221 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.688020 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-internal-tls-certs\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.688949 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-config-data\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.690617 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-combined-ca-bundle\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.692042 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-public-tls-certs\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.698223 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/55eba19c-92ed-4a40-81cb-9085d6becd76-config-data-custom\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.723349 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbk2j\" (UniqueName: \"kubernetes.io/projected/55eba19c-92ed-4a40-81cb-9085d6becd76-kube-api-access-jbk2j\") pod \"heat-cfnapi-7669f9847d-7bgvn\" (UID: \"55eba19c-92ed-4a40-81cb-9085d6becd76\") " pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:42 crc kubenswrapper[4904]: I1121 14:00:42.903917 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.251354 4904 generic.go:334] "Generic (PLEG): container finished" podID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" containerID="90cbc0c6cbd9d5b400d47960e29cb3e0db607af1db945b9eb71b9b7a7da29240" exitCode=0 Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.251471 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" event={"ID":"d6283671-4b23-4a54-a7bb-6e40619d5ea9","Type":"ContainerDied","Data":"90cbc0c6cbd9d5b400d47960e29cb3e0db607af1db945b9eb71b9b7a7da29240"} Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.256755 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.287493 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-nb\") pod \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.287629 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-swift-storage-0\") pod \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.287740 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-openstack-edpm-ipam\") pod \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.287788 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-svc\") pod \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.287820 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5qxr\" (UniqueName: \"kubernetes.io/projected/d6283671-4b23-4a54-a7bb-6e40619d5ea9-kube-api-access-q5qxr\") pod \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.287894 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-config\") pod \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.288110 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-sb\") pod \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\" (UID: \"d6283671-4b23-4a54-a7bb-6e40619d5ea9\") " Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.298432 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6283671-4b23-4a54-a7bb-6e40619d5ea9-kube-api-access-q5qxr" (OuterVolumeSpecName: "kube-api-access-q5qxr") pod "d6283671-4b23-4a54-a7bb-6e40619d5ea9" (UID: "d6283671-4b23-4a54-a7bb-6e40619d5ea9"). InnerVolumeSpecName "kube-api-access-q5qxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.359969 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6283671-4b23-4a54-a7bb-6e40619d5ea9" (UID: "d6283671-4b23-4a54-a7bb-6e40619d5ea9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.364260 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-config" (OuterVolumeSpecName: "config") pod "d6283671-4b23-4a54-a7bb-6e40619d5ea9" (UID: "d6283671-4b23-4a54-a7bb-6e40619d5ea9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.373911 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d6283671-4b23-4a54-a7bb-6e40619d5ea9" (UID: "d6283671-4b23-4a54-a7bb-6e40619d5ea9"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.373951 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6283671-4b23-4a54-a7bb-6e40619d5ea9" (UID: "d6283671-4b23-4a54-a7bb-6e40619d5ea9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.374862 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6283671-4b23-4a54-a7bb-6e40619d5ea9" (UID: "d6283671-4b23-4a54-a7bb-6e40619d5ea9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.375388 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6283671-4b23-4a54-a7bb-6e40619d5ea9" (UID: "d6283671-4b23-4a54-a7bb-6e40619d5ea9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.391967 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.392004 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.392016 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.392027 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.392038 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5qxr\" (UniqueName: \"kubernetes.io/projected/d6283671-4b23-4a54-a7bb-6e40619d5ea9-kube-api-access-q5qxr\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.392048 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-config\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.392055 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6283671-4b23-4a54-a7bb-6e40619d5ea9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.392618 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5bcc58b9d9-fdhrg"] Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.547094 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.613813 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7669f9847d-7bgvn"] Nov 21 14:00:43 crc kubenswrapper[4904]: W1121 14:00:43.616205 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda6b9694_5408_45ce_8e1c_3cf05c404837.slice/crio-9a07f3e71b153b1a1321151bdc725837d034bcf43888d147daf9732e5559ffa5 WatchSource:0}: Error finding container 9a07f3e71b153b1a1321151bdc725837d034bcf43888d147daf9732e5559ffa5: Status 404 returned error can't find the container with id 9a07f3e71b153b1a1321151bdc725837d034bcf43888d147daf9732e5559ffa5 Nov 21 14:00:43 crc kubenswrapper[4904]: I1121 14:00:43.644199 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5bd7cc4489-fmh8v"] Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.278398 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" event={"ID":"d6283671-4b23-4a54-a7bb-6e40619d5ea9","Type":"ContainerDied","Data":"eac4d15c1d0ab5641c81e32b4b649d67068bc6a344a4c53e9451c2c28fc6ac0f"} Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.278950 4904 scope.go:117] "RemoveContainer" 
containerID="90cbc0c6cbd9d5b400d47960e29cb3e0db607af1db945b9eb71b9b7a7da29240" Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.279218 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-g52dw" Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.285467 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerStarted","Data":"b996f01dcb6130895e201da08f4b964506c7b516a81d278f9ee1c145dff75de4"} Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.295111 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5bcc58b9d9-fdhrg" event={"ID":"c9c258f6-6c0c-4072-bd71-610209d2bbb9","Type":"ContainerStarted","Data":"e285be34240fdb7a800fd133123439762c5b3416ca4ca29f81501cf48603cdce"} Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.295173 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5bcc58b9d9-fdhrg" event={"ID":"c9c258f6-6c0c-4072-bd71-610209d2bbb9","Type":"ContainerStarted","Data":"644aa2331a38fd941bb8ff575bdfd4161f357b65e6796dc796542cb1fea93b0a"} Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.295284 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.297275 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7669f9847d-7bgvn" event={"ID":"55eba19c-92ed-4a40-81cb-9085d6becd76","Type":"ContainerStarted","Data":"f51d40883e75f11d501bb98182826c9942cb15066688fb3ba1aa11dc81b52759"} Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.307550 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5bd7cc4489-fmh8v" event={"ID":"da6b9694-5408-45ce-8e1c-3cf05c404837","Type":"ContainerStarted","Data":"9a07f3e71b153b1a1321151bdc725837d034bcf43888d147daf9732e5559ffa5"} Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.333047 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.370610935 podStartE2EDuration="50.332998615s" podCreationTimestamp="2025-11-21 13:59:54 +0000 UTC" firstStartedPulling="2025-11-21 13:59:55.752716857 +0000 UTC m=+1669.874249409" lastFinishedPulling="2025-11-21 14:00:43.715104537 +0000 UTC m=+1717.836637089" observedRunningTime="2025-11-21 14:00:44.31526495 +0000 UTC m=+1718.436797682" watchObservedRunningTime="2025-11-21 14:00:44.332998615 +0000 UTC m=+1718.454531177" Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.346130 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5bcc58b9d9-fdhrg" podStartSLOduration=2.346098626 podStartE2EDuration="2.346098626s" podCreationTimestamp="2025-11-21 14:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:44.339373902 +0000 UTC m=+1718.460906474" watchObservedRunningTime="2025-11-21 14:00:44.346098626 +0000 UTC m=+1718.467631178" Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.379259 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-g52dw"] Nov 21 14:00:44 crc kubenswrapper[4904]: I1121 14:00:44.395100 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-g52dw"] Nov 21 14:00:44 crc kubenswrapper[4904]: 
I1121 14:00:44.540121 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" path="/var/lib/kubelet/pods/d6283671-4b23-4a54-a7bb-6e40619d5ea9/volumes" Nov 21 14:00:45 crc kubenswrapper[4904]: I1121 14:00:45.227497 4904 scope.go:117] "RemoveContainer" containerID="304a99c8c39d4dcf0aad03daee2fa3f8f5a0e9640fd50d162bcc018ac8bdced2" Nov 21 14:00:46 crc kubenswrapper[4904]: I1121 14:00:46.338075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7669f9847d-7bgvn" event={"ID":"55eba19c-92ed-4a40-81cb-9085d6becd76","Type":"ContainerStarted","Data":"872b935c0ea62c6e62c5930d5e6a6812a1d34b092999bf166f442632c7c96ed2"} Nov 21 14:00:46 crc kubenswrapper[4904]: I1121 14:00:46.338836 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:46 crc kubenswrapper[4904]: I1121 14:00:46.339407 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5bd7cc4489-fmh8v" event={"ID":"da6b9694-5408-45ce-8e1c-3cf05c404837","Type":"ContainerStarted","Data":"c608d5b2b84a94da00ac337248b0e25f2dc8d1589043d83a287e775b6b0c1272"} Nov 21 14:00:46 crc kubenswrapper[4904]: I1121 14:00:46.339689 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:46 crc kubenswrapper[4904]: I1121 14:00:46.358466 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7669f9847d-7bgvn" podStartSLOduration=2.688224619 podStartE2EDuration="4.358446409s" podCreationTimestamp="2025-11-21 14:00:42 +0000 UTC" firstStartedPulling="2025-11-21 14:00:43.628010713 +0000 UTC m=+1717.749543265" lastFinishedPulling="2025-11-21 14:00:45.298232503 +0000 UTC m=+1719.419765055" observedRunningTime="2025-11-21 14:00:46.354764008 +0000 UTC m=+1720.476296570" watchObservedRunningTime="2025-11-21 14:00:46.358446409 +0000 UTC m=+1720.479978961" Nov 21 14:00:46 crc kubenswrapper[4904]: I1121 14:00:46.396720 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5bd7cc4489-fmh8v" podStartSLOduration=2.7182885150000002 podStartE2EDuration="4.396694685s" podCreationTimestamp="2025-11-21 14:00:42 +0000 UTC" firstStartedPulling="2025-11-21 14:00:43.619141486 +0000 UTC m=+1717.740674028" lastFinishedPulling="2025-11-21 14:00:45.297547646 +0000 UTC m=+1719.419080198" observedRunningTime="2025-11-21 14:00:46.377623328 +0000 UTC m=+1720.499155880" watchObservedRunningTime="2025-11-21 14:00:46.396694685 +0000 UTC m=+1720.518227237" Nov 21 14:00:46 crc kubenswrapper[4904]: I1121 14:00:46.514036 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:00:46 crc kubenswrapper[4904]: E1121 14:00:46.514551 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:00:54 crc kubenswrapper[4904]: I1121 14:00:54.651136 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7669f9847d-7bgvn" Nov 21 14:00:54 crc kubenswrapper[4904]: I1121 14:00:54.720510 4904 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-655bcb7b5f-7cf7z"] Nov 21 14:00:54 crc kubenswrapper[4904]: I1121 14:00:54.721045 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" podUID="4316a7f8-1be9-410c-a536-28107c101ac5" containerName="heat-cfnapi" containerID="cri-o://f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6" gracePeriod=60 Nov 21 14:00:54 crc kubenswrapper[4904]: I1121 14:00:54.978274 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5bd7cc4489-fmh8v" Nov 21 14:00:55 crc kubenswrapper[4904]: I1121 14:00:55.059103 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67b7b4f58b-np6xc"] Nov 21 14:00:55 crc kubenswrapper[4904]: I1121 14:00:55.059451 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-67b7b4f58b-np6xc" podUID="17e8035b-39b0-4410-805f-635ed96e3e46" containerName="heat-api" containerID="cri-o://e84b41adf000ee575d1c1e8d841c01ba15be0896116811d265227a22b5cec2d5" gracePeriod=60 Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.007613 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs"] Nov 21 14:00:56 crc kubenswrapper[4904]: E1121 14:00:56.008312 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" containerName="init" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.008330 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" containerName="init" Nov 21 14:00:56 crc kubenswrapper[4904]: E1121 14:00:56.008381 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" containerName="dnsmasq-dns" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.008388 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" containerName="dnsmasq-dns" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.008733 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6283671-4b23-4a54-a7bb-6e40619d5ea9" containerName="dnsmasq-dns" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.011584 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.014720 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.015128 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.015330 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.017097 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.025424 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs"] Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.087019 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpmjs\" (UniqueName: \"kubernetes.io/projected/16c65fc0-a096-4ee6-ba62-cc26be39eda9-kube-api-access-tpmjs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.087100 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.087513 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.087976 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.190299 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpmjs\" (UniqueName: \"kubernetes.io/projected/16c65fc0-a096-4ee6-ba62-cc26be39eda9-kube-api-access-tpmjs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.190785 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.190875 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.190941 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.200439 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.201078 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.207432 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.210216 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpmjs\" (UniqueName: \"kubernetes.io/projected/16c65fc0-a096-4ee6-ba62-cc26be39eda9-kube-api-access-tpmjs\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:56 crc kubenswrapper[4904]: I1121 14:00:56.356720 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:00:57 crc kubenswrapper[4904]: I1121 14:00:57.064412 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs"] Nov 21 14:00:57 crc kubenswrapper[4904]: I1121 14:00:57.503225 4904 generic.go:334] "Generic (PLEG): container finished" podID="8b1f6e46-f0d4-421a-bb86-48f1d622cd97" containerID="5e068ded51b1fc80e9eb221543458bbb23916ecee5a97117b15a7ff43582e730" exitCode=0 Nov 21 14:00:57 crc kubenswrapper[4904]: I1121 14:00:57.503316 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8b1f6e46-f0d4-421a-bb86-48f1d622cd97","Type":"ContainerDied","Data":"5e068ded51b1fc80e9eb221543458bbb23916ecee5a97117b15a7ff43582e730"} Nov 21 14:00:57 crc kubenswrapper[4904]: I1121 14:00:57.508919 4904 generic.go:334] "Generic (PLEG): container finished" podID="fdcbae10-10ee-4213-8758-ce56fbe6a27e" containerID="6d09a8db82a01e69368ac18babb5382fc21c1f1af9025bf1a04042033691e901" exitCode=0 Nov 21 14:00:57 crc kubenswrapper[4904]: I1121 14:00:57.509026 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fdcbae10-10ee-4213-8758-ce56fbe6a27e","Type":"ContainerDied","Data":"6d09a8db82a01e69368ac18babb5382fc21c1f1af9025bf1a04042033691e901"} Nov 21 14:00:57 crc kubenswrapper[4904]: I1121 14:00:57.515521 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" event={"ID":"16c65fc0-a096-4ee6-ba62-cc26be39eda9","Type":"ContainerStarted","Data":"91a4814b16b325866669f70eb6233a97814589f3cd2f97bb5430650afd68afaf"} Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.033860 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" podUID="4316a7f8-1be9-410c-a536-28107c101ac5" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.206:8000/healthcheck\": dial tcp 10.217.0.206:8000: connect: connection refused" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.343874 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-67b7b4f58b-np6xc" podUID="17e8035b-39b0-4410-805f-635ed96e3e46" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.205:8004/healthcheck\": read tcp 10.217.0.2:45158->10.217.0.205:8004: read: connection reset by peer" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.491576 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.514578 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:00:58 crc kubenswrapper[4904]: E1121 14:00:58.515148 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.537059 4904 generic.go:334] "Generic (PLEG): container finished" podID="17e8035b-39b0-4410-805f-635ed96e3e46" containerID="e84b41adf000ee575d1c1e8d841c01ba15be0896116811d265227a22b5cec2d5" exitCode=0 Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.540252 4904 generic.go:334] "Generic (PLEG): container finished" podID="4316a7f8-1be9-410c-a536-28107c101ac5" containerID="f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6" exitCode=0 Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.540354 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.645923 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67b7b4f58b-np6xc" event={"ID":"17e8035b-39b0-4410-805f-635ed96e3e46","Type":"ContainerDied","Data":"e84b41adf000ee575d1c1e8d841c01ba15be0896116811d265227a22b5cec2d5"} Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.646415 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8b1f6e46-f0d4-421a-bb86-48f1d622cd97","Type":"ContainerStarted","Data":"f33f552c3bb3e87699aad0fae12e1d5b0259e8af4b59e04ed048b71eb2519000"} Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.646445 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" event={"ID":"4316a7f8-1be9-410c-a536-28107c101ac5","Type":"ContainerDied","Data":"f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6"} Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.646465 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-655bcb7b5f-7cf7z" event={"ID":"4316a7f8-1be9-410c-a536-28107c101ac5","Type":"ContainerDied","Data":"5d8929bab13474664f08aa6b6690fabd6354dd70e4fe47a31f84060d2ddddd7e"} Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.646479 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"fdcbae10-10ee-4213-8758-ce56fbe6a27e","Type":"ContainerStarted","Data":"2e789a44e278f7a57a0d4ec81e3e9f4a6a8abd0a4d543ceafd09b8be655611c0"} Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.646823 4904 scope.go:117] "RemoveContainer" containerID="f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.647140 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.647266 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 
14:00:58.671196 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-internal-tls-certs\") pod \"4316a7f8-1be9-410c-a536-28107c101ac5\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.671279 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-public-tls-certs\") pod \"4316a7f8-1be9-410c-a536-28107c101ac5\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.671506 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data-custom\") pod \"4316a7f8-1be9-410c-a536-28107c101ac5\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.671709 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-combined-ca-bundle\") pod \"4316a7f8-1be9-410c-a536-28107c101ac5\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.671741 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data\") pod \"4316a7f8-1be9-410c-a536-28107c101ac5\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.671858 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x85w6\" (UniqueName: \"kubernetes.io/projected/4316a7f8-1be9-410c-a536-28107c101ac5-kube-api-access-x85w6\") pod \"4316a7f8-1be9-410c-a536-28107c101ac5\" (UID: \"4316a7f8-1be9-410c-a536-28107c101ac5\") " Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.709532 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4316a7f8-1be9-410c-a536-28107c101ac5-kube-api-access-x85w6" (OuterVolumeSpecName: "kube-api-access-x85w6") pod "4316a7f8-1be9-410c-a536-28107c101ac5" (UID: "4316a7f8-1be9-410c-a536-28107c101ac5"). InnerVolumeSpecName "kube-api-access-x85w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.712024 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4316a7f8-1be9-410c-a536-28107c101ac5" (UID: "4316a7f8-1be9-410c-a536-28107c101ac5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.752409 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=48.752380296 podStartE2EDuration="48.752380296s" podCreationTimestamp="2025-11-21 14:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:58.732396497 +0000 UTC m=+1732.853929069" watchObservedRunningTime="2025-11-21 14:00:58.752380296 +0000 UTC m=+1732.873912848" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.752877 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4316a7f8-1be9-410c-a536-28107c101ac5" (UID: "4316a7f8-1be9-410c-a536-28107c101ac5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.783847 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4316a7f8-1be9-410c-a536-28107c101ac5" (UID: "4316a7f8-1be9-410c-a536-28107c101ac5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.785124 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.785156 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.785169 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x85w6\" (UniqueName: \"kubernetes.io/projected/4316a7f8-1be9-410c-a536-28107c101ac5-kube-api-access-x85w6\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.785185 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.846207 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=48.846172924 podStartE2EDuration="48.846172924s" podCreationTimestamp="2025-11-21 14:00:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:00:58.764040783 +0000 UTC m=+1732.885573355" watchObservedRunningTime="2025-11-21 14:00:58.846172924 +0000 UTC m=+1732.967705476" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.871479 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data" (OuterVolumeSpecName: "config-data") pod "4316a7f8-1be9-410c-a536-28107c101ac5" (UID: "4316a7f8-1be9-410c-a536-28107c101ac5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.888282 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.910874 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4316a7f8-1be9-410c-a536-28107c101ac5" (UID: "4316a7f8-1be9-410c-a536-28107c101ac5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.994466 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4316a7f8-1be9-410c-a536-28107c101ac5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:58 crc kubenswrapper[4904]: I1121 14:00:58.999898 4904 scope.go:117] "RemoveContainer" containerID="f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6" Nov 21 14:00:59 crc kubenswrapper[4904]: E1121 14:00:59.000975 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6\": container with ID starting with f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6 not found: ID does not exist" containerID="f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.001042 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6"} err="failed to get container status \"f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6\": rpc error: code = NotFound desc = could not find container \"f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6\": container with ID starting with f153d81ccfccae72de92f6e6030057d76753b9829e3b04dae92039c08d87d3c6 not found: ID does not exist" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.045610 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.192789 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-655bcb7b5f-7cf7z"] Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.200874 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data\") pod \"17e8035b-39b0-4410-805f-635ed96e3e46\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.201050 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-combined-ca-bundle\") pod \"17e8035b-39b0-4410-805f-635ed96e3e46\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.201132 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shkgw\" (UniqueName: \"kubernetes.io/projected/17e8035b-39b0-4410-805f-635ed96e3e46-kube-api-access-shkgw\") pod \"17e8035b-39b0-4410-805f-635ed96e3e46\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.201211 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data-custom\") pod \"17e8035b-39b0-4410-805f-635ed96e3e46\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.201322 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-public-tls-certs\") pod \"17e8035b-39b0-4410-805f-635ed96e3e46\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.201381 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-internal-tls-certs\") pod \"17e8035b-39b0-4410-805f-635ed96e3e46\" (UID: \"17e8035b-39b0-4410-805f-635ed96e3e46\") " Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.204583 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "17e8035b-39b0-4410-805f-635ed96e3e46" (UID: "17e8035b-39b0-4410-805f-635ed96e3e46"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.207149 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e8035b-39b0-4410-805f-635ed96e3e46-kube-api-access-shkgw" (OuterVolumeSpecName: "kube-api-access-shkgw") pod "17e8035b-39b0-4410-805f-635ed96e3e46" (UID: "17e8035b-39b0-4410-805f-635ed96e3e46"). InnerVolumeSpecName "kube-api-access-shkgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.208185 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-655bcb7b5f-7cf7z"] Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.239123 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17e8035b-39b0-4410-805f-635ed96e3e46" (UID: "17e8035b-39b0-4410-805f-635ed96e3e46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.283704 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data" (OuterVolumeSpecName: "config-data") pod "17e8035b-39b0-4410-805f-635ed96e3e46" (UID: "17e8035b-39b0-4410-805f-635ed96e3e46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.290698 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "17e8035b-39b0-4410-805f-635ed96e3e46" (UID: "17e8035b-39b0-4410-805f-635ed96e3e46"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.304782 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.304825 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.304836 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.304849 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shkgw\" (UniqueName: \"kubernetes.io/projected/17e8035b-39b0-4410-805f-635ed96e3e46-kube-api-access-shkgw\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.304860 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.310725 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "17e8035b-39b0-4410-805f-635ed96e3e46" (UID: "17e8035b-39b0-4410-805f-635ed96e3e46"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.407502 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17e8035b-39b0-4410-805f-635ed96e3e46-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.567257 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-67b7b4f58b-np6xc" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.568740 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-67b7b4f58b-np6xc" event={"ID":"17e8035b-39b0-4410-805f-635ed96e3e46","Type":"ContainerDied","Data":"20adc14a7ae5314f6eb3939472f077c96cbee62a652201f3f10d38f18eee09c9"} Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.568796 4904 scope.go:117] "RemoveContainer" containerID="e84b41adf000ee575d1c1e8d841c01ba15be0896116811d265227a22b5cec2d5" Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.645853 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-67b7b4f58b-np6xc"] Nov 21 14:00:59 crc kubenswrapper[4904]: I1121 14:00:59.655871 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-67b7b4f58b-np6xc"] Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.146833 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29395561-8g6qs"] Nov 21 14:01:00 crc kubenswrapper[4904]: E1121 14:01:00.147328 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4316a7f8-1be9-410c-a536-28107c101ac5" containerName="heat-cfnapi" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.147352 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="4316a7f8-1be9-410c-a536-28107c101ac5" containerName="heat-cfnapi" Nov 21 14:01:00 crc kubenswrapper[4904]: E1121 14:01:00.147373 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e8035b-39b0-4410-805f-635ed96e3e46" containerName="heat-api" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.147382 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e8035b-39b0-4410-805f-635ed96e3e46" containerName="heat-api" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.153991 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e8035b-39b0-4410-805f-635ed96e3e46" containerName="heat-api" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.154032 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="4316a7f8-1be9-410c-a536-28107c101ac5" containerName="heat-cfnapi" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.155281 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.164713 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395561-8g6qs"] Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.231367 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdfpt\" (UniqueName: \"kubernetes.io/projected/ba2790bd-7e69-4181-8240-95640e6696e4-kube-api-access-sdfpt\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.231432 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-combined-ca-bundle\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.231821 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-fernet-keys\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.232037 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-config-data\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.337687 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-combined-ca-bundle\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.339230 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-fernet-keys\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.341021 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-config-data\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.342408 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdfpt\" (UniqueName: \"kubernetes.io/projected/ba2790bd-7e69-4181-8240-95640e6696e4-kube-api-access-sdfpt\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.347559 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-combined-ca-bundle\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.356920 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-fernet-keys\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.357100 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-config-data\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.361421 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdfpt\" (UniqueName: \"kubernetes.io/projected/ba2790bd-7e69-4181-8240-95640e6696e4-kube-api-access-sdfpt\") pod \"keystone-cron-29395561-8g6qs\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.522599 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.529448 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e8035b-39b0-4410-805f-635ed96e3e46" path="/var/lib/kubelet/pods/17e8035b-39b0-4410-805f-635ed96e3e46/volumes" Nov 21 14:01:00 crc kubenswrapper[4904]: I1121 14:01:00.530543 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4316a7f8-1be9-410c-a536-28107c101ac5" path="/var/lib/kubelet/pods/4316a7f8-1be9-410c-a536-28107c101ac5/volumes" Nov 21 14:01:02 crc kubenswrapper[4904]: I1121 14:01:02.208942 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395561-8g6qs"] Nov 21 14:01:02 crc kubenswrapper[4904]: I1121 14:01:02.609055 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395561-8g6qs" event={"ID":"ba2790bd-7e69-4181-8240-95640e6696e4","Type":"ContainerStarted","Data":"4473d93f55ae22c1eaac170de812e0d9046c331c5c483432db7f78757dc54e4f"} Nov 21 14:01:02 crc kubenswrapper[4904]: I1121 14:01:02.609503 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395561-8g6qs" event={"ID":"ba2790bd-7e69-4181-8240-95640e6696e4","Type":"ContainerStarted","Data":"05e28f8a69e05610e94c6e7e1bea50fb1c511b3f70a7c6bf2d7c9ce29b7f14e5"} Nov 21 14:01:02 crc kubenswrapper[4904]: I1121 14:01:02.635854 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29395561-8g6qs" podStartSLOduration=2.635819 podStartE2EDuration="2.635819s" podCreationTimestamp="2025-11-21 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:01:02.630341776 +0000 UTC m=+1736.751874338" watchObservedRunningTime="2025-11-21 14:01:02.635819 +0000 UTC m=+1736.757351572" Nov 21 14:01:02 crc kubenswrapper[4904]: I1121 14:01:02.657509 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-engine-5bcc58b9d9-fdhrg" Nov 21 14:01:02 crc kubenswrapper[4904]: I1121 14:01:02.798717 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-f99cf4f76-lnstx"] Nov 21 14:01:02 crc kubenswrapper[4904]: I1121 14:01:02.799390 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-f99cf4f76-lnstx" podUID="9c8eecc3-a2e3-427e-af0e-adc6e08416ad" containerName="heat-engine" containerID="cri-o://1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" gracePeriod=60 Nov 21 14:01:05 crc kubenswrapper[4904]: I1121 14:01:05.648596 4904 generic.go:334] "Generic (PLEG): container finished" podID="ba2790bd-7e69-4181-8240-95640e6696e4" containerID="4473d93f55ae22c1eaac170de812e0d9046c331c5c483432db7f78757dc54e4f" exitCode=0 Nov 21 14:01:05 crc kubenswrapper[4904]: I1121 14:01:05.650183 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395561-8g6qs" event={"ID":"ba2790bd-7e69-4181-8240-95640e6696e4","Type":"ContainerDied","Data":"4473d93f55ae22c1eaac170de812e0d9046c331c5c483432db7f78757dc54e4f"} Nov 21 14:01:05 crc kubenswrapper[4904]: E1121 14:01:05.890759 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:01:05 crc kubenswrapper[4904]: E1121 14:01:05.894206 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:01:05 crc kubenswrapper[4904]: E1121 14:01:05.896225 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:01:05 crc kubenswrapper[4904]: E1121 14:01:05.896316 4904 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-f99cf4f76-lnstx" podUID="9c8eecc3-a2e3-427e-af0e-adc6e08416ad" containerName="heat-engine" Nov 21 14:01:08 crc kubenswrapper[4904]: I1121 14:01:08.779900 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-ff79s"] Nov 21 14:01:08 crc kubenswrapper[4904]: I1121 14:01:08.793852 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-ff79s"] Nov 21 14:01:08 crc kubenswrapper[4904]: I1121 14:01:08.957341 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-g7z2w"] Nov 21 14:01:08 crc kubenswrapper[4904]: I1121 14:01:08.959809 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:08 crc kubenswrapper[4904]: I1121 14:01:08.964405 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.002277 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-g7z2w"] Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.021698 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk64x\" (UniqueName: \"kubernetes.io/projected/246116d1-5334-46cf-bcb6-dd2223cf67d7-kube-api-access-dk64x\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.022155 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-config-data\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.022286 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-scripts\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.022478 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-combined-ca-bundle\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.124914 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-config-data\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.124985 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-scripts\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.125097 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-combined-ca-bundle\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.125225 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk64x\" (UniqueName: \"kubernetes.io/projected/246116d1-5334-46cf-bcb6-dd2223cf67d7-kube-api-access-dk64x\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.133990 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-scripts\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.140717 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-config-data\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.140892 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-combined-ca-bundle\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.143710 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk64x\" (UniqueName: \"kubernetes.io/projected/246116d1-5334-46cf-bcb6-dd2223cf67d7-kube-api-access-dk64x\") pod \"aodh-db-sync-g7z2w\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:09 crc kubenswrapper[4904]: I1121 14:01:09.304474 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:10 crc kubenswrapper[4904]: I1121 14:01:10.538696 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54872f28-88cc-4a28-936c-7271bc0ad2b1" path="/var/lib/kubelet/pods/54872f28-88cc-4a28-936c-7271bc0ad2b1/volumes" Nov 21 14:01:11 crc kubenswrapper[4904]: I1121 14:01:11.297945 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 21 14:01:11 crc kubenswrapper[4904]: I1121 14:01:11.482964 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 21 14:01:11 crc kubenswrapper[4904]: I1121 14:01:11.515202 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:01:11 crc kubenswrapper[4904]: E1121 14:01:11.515475 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:01:15 crc kubenswrapper[4904]: E1121 14:01:15.890281 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:01:15 crc kubenswrapper[4904]: E1121 14:01:15.893045 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 21 14:01:15 crc kubenswrapper[4904]: E1121 
14:01:15.895050 4904 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 21 14:01:15 crc kubenswrapper[4904]: E1121 14:01:15.895105 4904 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-f99cf4f76-lnstx" podUID="9c8eecc3-a2e3-427e-af0e-adc6e08416ad" containerName="heat-engine"
Nov 21 14:01:19 crc kubenswrapper[4904]: E1121 14:01:19.111525 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Nov 21 14:01:19 crc kubenswrapper[4904]: E1121 14:01:19.112210 4904 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Nov 21 14:01:19 crc kubenswrapper[4904]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Nov 21 14:01:19 crc kubenswrapper[4904]: - hosts: all
Nov 21 14:01:19 crc kubenswrapper[4904]: strategy: linear
Nov 21 14:01:19 crc kubenswrapper[4904]: tasks:
Nov 21 14:01:19 crc kubenswrapper[4904]: - name: Enable podified-repos
Nov 21 14:01:19 crc kubenswrapper[4904]: become: true
Nov 21 14:01:19 crc kubenswrapper[4904]: ansible.builtin.shell: |
Nov 21 14:01:19 crc kubenswrapper[4904]: set -euxo pipefail
Nov 21 14:01:19 crc kubenswrapper[4904]: pushd /var/tmp
Nov 21 14:01:19 crc kubenswrapper[4904]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Nov 21 14:01:19 crc kubenswrapper[4904]: pushd repo-setup-main
Nov 21 14:01:19 crc kubenswrapper[4904]: python3 -m venv ./venv
Nov 21 14:01:19 crc kubenswrapper[4904]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Nov 21 14:01:19 crc kubenswrapper[4904]: ./venv/bin/repo-setup current-podified -b antelope
Nov 21 14:01:19 crc kubenswrapper[4904]: popd
Nov 21 14:01:19 crc kubenswrapper[4904]: rm -rf repo-setup-main
Nov 21 14:01:19 crc kubenswrapper[4904]: 
Nov 21 14:01:19 crc kubenswrapper[4904]: 
Nov 21 14:01:19 crc kubenswrapper[4904]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Nov 21 14:01:19 crc kubenswrapper[4904]: edpm_override_hosts: openstack-edpm-ipam
Nov 21 14:01:19 crc kubenswrapper[4904]: edpm_service_type: repo-setup
Nov 21 14:01:19 crc kubenswrapper[4904]: 
Nov 21 14:01:19 crc kubenswrapper[4904]: 
Nov 21 14:01:19 crc kubenswrapper[4904]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpmjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs_openstack(16c65fc0-a096-4ee6-ba62-cc26be39eda9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Nov 21 14:01:19 crc kubenswrapper[4904]: > logger="UnhandledError"
Nov 21 14:01:19 crc kubenswrapper[4904]: E1121 14:01:19.114173 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" podUID="16c65fc0-a096-4ee6-ba62-cc26be39eda9"
Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.176837 4904 util.go:48] "No ready sandbox for pod can be found. 
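For reference, the RUNNER_PLAYBOOK value embedded in the container spec dump above, rendered as standalone YAML with the journal prefixes stripped. The lines are verbatim from the log, but the indentation is inferred (journald flattened the leading whitespace), so treat this as a best-effort reconstruction of the playbook rather than a copy of the original file. It fetches the repo-setup tool and enables the current podified antelope repos on the EDPM hosts:

- hosts: all
  strategy: linear
  tasks:
    - name: Enable podified-repos
      become: true
      ansible.builtin.shell: |
        set -euxo pipefail
        pushd /var/tmp
        curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
        pushd repo-setup-main
        python3 -m venv ./venv
        PBR_VERSION=0.0.0 ./venv/bin/pip install ./
        ./venv/bin/repo-setup current-podified -b antelope
        popd
        rm -rf repo-setup-main

The RUNNER_EXTRA_VARS value from the same spec carries two variables (indentation again inferred):

edpm_override_hosts: openstack-edpm-ipam
edpm_service_type: repo-setup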
Need to start a new one" pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.302833 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-fernet-keys\") pod \"ba2790bd-7e69-4181-8240-95640e6696e4\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.308309 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-config-data\") pod \"ba2790bd-7e69-4181-8240-95640e6696e4\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.308360 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdfpt\" (UniqueName: \"kubernetes.io/projected/ba2790bd-7e69-4181-8240-95640e6696e4-kube-api-access-sdfpt\") pod \"ba2790bd-7e69-4181-8240-95640e6696e4\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.308425 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-combined-ca-bundle\") pod \"ba2790bd-7e69-4181-8240-95640e6696e4\" (UID: \"ba2790bd-7e69-4181-8240-95640e6696e4\") " Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.318538 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ba2790bd-7e69-4181-8240-95640e6696e4" (UID: "ba2790bd-7e69-4181-8240-95640e6696e4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.319245 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2790bd-7e69-4181-8240-95640e6696e4-kube-api-access-sdfpt" (OuterVolumeSpecName: "kube-api-access-sdfpt") pod "ba2790bd-7e69-4181-8240-95640e6696e4" (UID: "ba2790bd-7e69-4181-8240-95640e6696e4"). InnerVolumeSpecName "kube-api-access-sdfpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.390266 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba2790bd-7e69-4181-8240-95640e6696e4" (UID: "ba2790bd-7e69-4181-8240-95640e6696e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.414517 4904 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.414594 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdfpt\" (UniqueName: \"kubernetes.io/projected/ba2790bd-7e69-4181-8240-95640e6696e4-kube-api-access-sdfpt\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.414609 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.472384 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-config-data" (OuterVolumeSpecName: "config-data") pod "ba2790bd-7e69-4181-8240-95640e6696e4" (UID: "ba2790bd-7e69-4181-8240-95640e6696e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.516848 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2790bd-7e69-4181-8240-95640e6696e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.698400 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-g7z2w"] Nov 21 14:01:19 crc kubenswrapper[4904]: W1121 14:01:19.699536 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod246116d1_5334_46cf_bcb6_dd2223cf67d7.slice/crio-55bf3f6c691722b7a69c0a57626e3a57f9978ed6ca53f74ff30731e649f087ed WatchSource:0}: Error finding container 55bf3f6c691722b7a69c0a57626e3a57f9978ed6ca53f74ff30731e649f087ed: Status 404 returned error can't find the container with id 55bf3f6c691722b7a69c0a57626e3a57f9978ed6ca53f74ff30731e649f087ed Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.899272 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-g7z2w" event={"ID":"246116d1-5334-46cf-bcb6-dd2223cf67d7","Type":"ContainerStarted","Data":"55bf3f6c691722b7a69c0a57626e3a57f9978ed6ca53f74ff30731e649f087ed"} Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.901587 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395561-8g6qs" event={"ID":"ba2790bd-7e69-4181-8240-95640e6696e4","Type":"ContainerDied","Data":"05e28f8a69e05610e94c6e7e1bea50fb1c511b3f70a7c6bf2d7c9ce29b7f14e5"} Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.901642 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05e28f8a69e05610e94c6e7e1bea50fb1c511b3f70a7c6bf2d7c9ce29b7f14e5" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.901718 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395561-8g6qs" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.903274 4904 generic.go:334] "Generic (PLEG): container finished" podID="9c8eecc3-a2e3-427e-af0e-adc6e08416ad" containerID="1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" exitCode=0 Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.903344 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-f99cf4f76-lnstx" event={"ID":"9c8eecc3-a2e3-427e-af0e-adc6e08416ad","Type":"ContainerDied","Data":"1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b"} Nov 21 14:01:19 crc kubenswrapper[4904]: E1121 14:01:19.905881 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" podUID="16c65fc0-a096-4ee6-ba62-cc26be39eda9" Nov 21 14:01:19 crc kubenswrapper[4904]: I1121 14:01:19.948553 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.140714 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data\") pod \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.141497 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l26fd\" (UniqueName: \"kubernetes.io/projected/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-kube-api-access-l26fd\") pod \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.142850 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data-custom\") pod \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.142934 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-combined-ca-bundle\") pod \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\" (UID: \"9c8eecc3-a2e3-427e-af0e-adc6e08416ad\") " Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.147085 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-kube-api-access-l26fd" (OuterVolumeSpecName: "kube-api-access-l26fd") pod "9c8eecc3-a2e3-427e-af0e-adc6e08416ad" (UID: "9c8eecc3-a2e3-427e-af0e-adc6e08416ad"). InnerVolumeSpecName "kube-api-access-l26fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.148456 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9c8eecc3-a2e3-427e-af0e-adc6e08416ad" (UID: "9c8eecc3-a2e3-427e-af0e-adc6e08416ad"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.177413 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c8eecc3-a2e3-427e-af0e-adc6e08416ad" (UID: "9c8eecc3-a2e3-427e-af0e-adc6e08416ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.220324 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data" (OuterVolumeSpecName: "config-data") pod "9c8eecc3-a2e3-427e-af0e-adc6e08416ad" (UID: "9c8eecc3-a2e3-427e-af0e-adc6e08416ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.244630 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.244689 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.244701 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l26fd\" (UniqueName: \"kubernetes.io/projected/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-kube-api-access-l26fd\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.244716 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c8eecc3-a2e3-427e-af0e-adc6e08416ad-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.922377 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-f99cf4f76-lnstx" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.924016 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-f99cf4f76-lnstx" event={"ID":"9c8eecc3-a2e3-427e-af0e-adc6e08416ad","Type":"ContainerDied","Data":"ec66aca68da23bc021ff29b7847472bf67d1e5830c092008ae1d8de2f73857c2"} Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.924134 4904 scope.go:117] "RemoveContainer" containerID="1031ecb56fb23dd0600b8d10efe46d3c36e3dfcae4620781fbbd414d39ddc73b" Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.966690 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-f99cf4f76-lnstx"] Nov 21 14:01:20 crc kubenswrapper[4904]: I1121 14:01:20.982397 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-f99cf4f76-lnstx"] Nov 21 14:01:22 crc kubenswrapper[4904]: I1121 14:01:22.194234 4904 scope.go:117] "RemoveContainer" containerID="33b80b32c15553705770185a11039ca099743a6432758a78e91caac7ae8b8c5e" Nov 21 14:01:22 crc kubenswrapper[4904]: I1121 14:01:22.529044 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8eecc3-a2e3-427e-af0e-adc6e08416ad" path="/var/lib/kubelet/pods/9c8eecc3-a2e3-427e-af0e-adc6e08416ad/volumes" Nov 21 14:01:23 crc kubenswrapper[4904]: I1121 14:01:23.514148 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:01:23 crc kubenswrapper[4904]: E1121 14:01:23.515113 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:01:24 crc kubenswrapper[4904]: I1121 14:01:24.997516 4904 scope.go:117] "RemoveContainer" containerID="189ffcc1823f66d47f3beea3fd56132aa9f41d3dddb41d8cf31f37c656129911" Nov 21 14:01:25 crc kubenswrapper[4904]: I1121 14:01:25.145588 4904 scope.go:117] "RemoveContainer" containerID="e59eeb757a8701354e7f2f7a476236a861b9748fe72833b87c930aa3d3dae7df" Nov 21 14:01:25 crc kubenswrapper[4904]: I1121 14:01:25.995312 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-g7z2w" event={"ID":"246116d1-5334-46cf-bcb6-dd2223cf67d7","Type":"ContainerStarted","Data":"4ba5431600e6a2f6cc6996d335ae22dc3d3d1a58cbb442b8052442e67bc9d915"} Nov 21 14:01:26 crc kubenswrapper[4904]: I1121 14:01:26.019629 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-g7z2w" podStartSLOduration=12.195679276 podStartE2EDuration="18.01960441s" podCreationTimestamp="2025-11-21 14:01:08 +0000 UTC" firstStartedPulling="2025-11-21 14:01:19.70234283 +0000 UTC m=+1753.823875372" lastFinishedPulling="2025-11-21 14:01:25.526267944 +0000 UTC m=+1759.647800506" observedRunningTime="2025-11-21 14:01:26.016608737 +0000 UTC m=+1760.138141289" watchObservedRunningTime="2025-11-21 14:01:26.01960441 +0000 UTC m=+1760.141136952" Nov 21 14:01:29 crc kubenswrapper[4904]: I1121 14:01:29.070956 4904 generic.go:334] "Generic (PLEG): container finished" podID="246116d1-5334-46cf-bcb6-dd2223cf67d7" containerID="4ba5431600e6a2f6cc6996d335ae22dc3d3d1a58cbb442b8052442e67bc9d915" exitCode=0 Nov 21 14:01:29 crc 
kubenswrapper[4904]: I1121 14:01:29.071470 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-g7z2w" event={"ID":"246116d1-5334-46cf-bcb6-dd2223cf67d7","Type":"ContainerDied","Data":"4ba5431600e6a2f6cc6996d335ae22dc3d3d1a58cbb442b8052442e67bc9d915"} Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.508749 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.635624 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-combined-ca-bundle\") pod \"246116d1-5334-46cf-bcb6-dd2223cf67d7\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.635687 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk64x\" (UniqueName: \"kubernetes.io/projected/246116d1-5334-46cf-bcb6-dd2223cf67d7-kube-api-access-dk64x\") pod \"246116d1-5334-46cf-bcb6-dd2223cf67d7\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.635753 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-scripts\") pod \"246116d1-5334-46cf-bcb6-dd2223cf67d7\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.635808 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-config-data\") pod \"246116d1-5334-46cf-bcb6-dd2223cf67d7\" (UID: \"246116d1-5334-46cf-bcb6-dd2223cf67d7\") " Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.646238 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/246116d1-5334-46cf-bcb6-dd2223cf67d7-kube-api-access-dk64x" (OuterVolumeSpecName: "kube-api-access-dk64x") pod "246116d1-5334-46cf-bcb6-dd2223cf67d7" (UID: "246116d1-5334-46cf-bcb6-dd2223cf67d7"). InnerVolumeSpecName "kube-api-access-dk64x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.646271 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-scripts" (OuterVolumeSpecName: "scripts") pod "246116d1-5334-46cf-bcb6-dd2223cf67d7" (UID: "246116d1-5334-46cf-bcb6-dd2223cf67d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.681044 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-config-data" (OuterVolumeSpecName: "config-data") pod "246116d1-5334-46cf-bcb6-dd2223cf67d7" (UID: "246116d1-5334-46cf-bcb6-dd2223cf67d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.690905 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "246116d1-5334-46cf-bcb6-dd2223cf67d7" (UID: "246116d1-5334-46cf-bcb6-dd2223cf67d7"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.738227 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.738269 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.738279 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/246116d1-5334-46cf-bcb6-dd2223cf67d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:30 crc kubenswrapper[4904]: I1121 14:01:30.738294 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk64x\" (UniqueName: \"kubernetes.io/projected/246116d1-5334-46cf-bcb6-dd2223cf67d7-kube-api-access-dk64x\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:31 crc kubenswrapper[4904]: I1121 14:01:31.101831 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-g7z2w" event={"ID":"246116d1-5334-46cf-bcb6-dd2223cf67d7","Type":"ContainerDied","Data":"55bf3f6c691722b7a69c0a57626e3a57f9978ed6ca53f74ff30731e649f087ed"} Nov 21 14:01:31 crc kubenswrapper[4904]: I1121 14:01:31.101887 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55bf3f6c691722b7a69c0a57626e3a57f9978ed6ca53f74ff30731e649f087ed" Nov 21 14:01:31 crc kubenswrapper[4904]: I1121 14:01:31.101993 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-g7z2w" Nov 21 14:01:33 crc kubenswrapper[4904]: I1121 14:01:33.213234 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:01:33 crc kubenswrapper[4904]: I1121 14:01:33.994273 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:33 crc kubenswrapper[4904]: I1121 14:01:33.994972 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-api" containerID="cri-o://a742b48696832c2016485f0b184f09de95c0419c776493851cb43530e0c7ce90" gracePeriod=30 Nov 21 14:01:33 crc kubenswrapper[4904]: I1121 14:01:33.995058 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-notifier" containerID="cri-o://52996f556100a56c896c4c229fbaf6a174b99256d9acbec5048925e4aaca5877" gracePeriod=30 Nov 21 14:01:33 crc kubenswrapper[4904]: I1121 14:01:33.995140 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-listener" containerID="cri-o://627e45cde73512938001be6413c7c771b94c9a51e925e0fe24910810af788fee" gracePeriod=30 Nov 21 14:01:33 crc kubenswrapper[4904]: I1121 14:01:33.995151 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-evaluator" containerID="cri-o://cf9cd5c6ae7ff150decc33c4aeb3a8dd86456f8f4e30272e0f281174544a55da" gracePeriod=30 Nov 21 14:01:34 crc kubenswrapper[4904]: I1121 14:01:34.160916 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" event={"ID":"16c65fc0-a096-4ee6-ba62-cc26be39eda9","Type":"ContainerStarted","Data":"6774cc8d57618da61cea01e300b7095f7363bf883fc4a165aaf5822e066f543f"} Nov 21 14:01:34 crc kubenswrapper[4904]: I1121 14:01:34.189615 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" podStartSLOduration=3.045248152 podStartE2EDuration="39.189588025s" podCreationTimestamp="2025-11-21 14:00:55 +0000 UTC" firstStartedPulling="2025-11-21 14:00:57.064703839 +0000 UTC m=+1731.186236391" lastFinishedPulling="2025-11-21 14:01:33.209043672 +0000 UTC m=+1767.330576264" observedRunningTime="2025-11-21 14:01:34.180776949 +0000 UTC m=+1768.302309501" watchObservedRunningTime="2025-11-21 14:01:34.189588025 +0000 UTC m=+1768.311120577" Nov 21 14:01:35 crc kubenswrapper[4904]: I1121 14:01:35.177139 4904 generic.go:334] "Generic (PLEG): container finished" podID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerID="cf9cd5c6ae7ff150decc33c4aeb3a8dd86456f8f4e30272e0f281174544a55da" exitCode=0 Nov 21 14:01:35 crc kubenswrapper[4904]: I1121 14:01:35.177575 4904 generic.go:334] "Generic (PLEG): container finished" podID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerID="a742b48696832c2016485f0b184f09de95c0419c776493851cb43530e0c7ce90" exitCode=0 Nov 21 14:01:35 crc kubenswrapper[4904]: I1121 14:01:35.177240 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerDied","Data":"cf9cd5c6ae7ff150decc33c4aeb3a8dd86456f8f4e30272e0f281174544a55da"} Nov 21 14:01:35 crc 
kubenswrapper[4904]: I1121 14:01:35.177633 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerDied","Data":"a742b48696832c2016485f0b184f09de95c0419c776493851cb43530e0c7ce90"} Nov 21 14:01:35 crc kubenswrapper[4904]: I1121 14:01:35.515219 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:01:35 crc kubenswrapper[4904]: E1121 14:01:35.515753 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.230198 4904 generic.go:334] "Generic (PLEG): container finished" podID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerID="627e45cde73512938001be6413c7c771b94c9a51e925e0fe24910810af788fee" exitCode=0 Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.231042 4904 generic.go:334] "Generic (PLEG): container finished" podID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerID="52996f556100a56c896c4c229fbaf6a174b99256d9acbec5048925e4aaca5877" exitCode=0 Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.230256 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerDied","Data":"627e45cde73512938001be6413c7c771b94c9a51e925e0fe24910810af788fee"} Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.231086 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerDied","Data":"52996f556100a56c896c4c229fbaf6a174b99256d9acbec5048925e4aaca5877"} Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.673802 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.805679 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7dpz\" (UniqueName: \"kubernetes.io/projected/06220aa0-c8a9-49c0-a1f6-f71022c409d6-kube-api-access-j7dpz\") pod \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.805759 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-config-data\") pod \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.805894 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-public-tls-certs\") pod \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.805918 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-internal-tls-certs\") pod \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.805972 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-combined-ca-bundle\") pod \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.806019 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-scripts\") pod \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\" (UID: \"06220aa0-c8a9-49c0-a1f6-f71022c409d6\") " Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.829937 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06220aa0-c8a9-49c0-a1f6-f71022c409d6-kube-api-access-j7dpz" (OuterVolumeSpecName: "kube-api-access-j7dpz") pod "06220aa0-c8a9-49c0-a1f6-f71022c409d6" (UID: "06220aa0-c8a9-49c0-a1f6-f71022c409d6"). InnerVolumeSpecName "kube-api-access-j7dpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.831858 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-scripts" (OuterVolumeSpecName: "scripts") pod "06220aa0-c8a9-49c0-a1f6-f71022c409d6" (UID: "06220aa0-c8a9-49c0-a1f6-f71022c409d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.909218 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "06220aa0-c8a9-49c0-a1f6-f71022c409d6" (UID: "06220aa0-c8a9-49c0-a1f6-f71022c409d6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.910175 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.910500 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.910525 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7dpz\" (UniqueName: \"kubernetes.io/projected/06220aa0-c8a9-49c0-a1f6-f71022c409d6-kube-api-access-j7dpz\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.959311 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "06220aa0-c8a9-49c0-a1f6-f71022c409d6" (UID: "06220aa0-c8a9-49c0-a1f6-f71022c409d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.974931 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-config-data" (OuterVolumeSpecName: "config-data") pod "06220aa0-c8a9-49c0-a1f6-f71022c409d6" (UID: "06220aa0-c8a9-49c0-a1f6-f71022c409d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:39 crc kubenswrapper[4904]: I1121 14:01:39.988848 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06220aa0-c8a9-49c0-a1f6-f71022c409d6" (UID: "06220aa0-c8a9-49c0-a1f6-f71022c409d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.016566 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.016601 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.016617 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06220aa0-c8a9-49c0-a1f6-f71022c409d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.244281 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"06220aa0-c8a9-49c0-a1f6-f71022c409d6","Type":"ContainerDied","Data":"b4ade0be00be751b675cdd28f3be014cc4f9985b1acc5286d05f4cb1676b638d"} Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.244396 4904 scope.go:117] "RemoveContainer" containerID="627e45cde73512938001be6413c7c771b94c9a51e925e0fe24910810af788fee" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.244394 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.278496 4904 scope.go:117] "RemoveContainer" containerID="52996f556100a56c896c4c229fbaf6a174b99256d9acbec5048925e4aaca5877" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.298774 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.313359 4904 scope.go:117] "RemoveContainer" containerID="cf9cd5c6ae7ff150decc33c4aeb3a8dd86456f8f4e30272e0f281174544a55da" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.322196 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.342186 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:40 crc kubenswrapper[4904]: E1121 14:01:40.342975 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-listener" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343004 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-listener" Nov 21 14:01:40 crc kubenswrapper[4904]: E1121 14:01:40.343043 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="246116d1-5334-46cf-bcb6-dd2223cf67d7" containerName="aodh-db-sync" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343051 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="246116d1-5334-46cf-bcb6-dd2223cf67d7" containerName="aodh-db-sync" Nov 21 14:01:40 crc kubenswrapper[4904]: E1121 14:01:40.343071 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-evaluator" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343078 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-evaluator" Nov 21 14:01:40 crc kubenswrapper[4904]: E1121 14:01:40.343089 4904 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-notifier" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343095 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-notifier" Nov 21 14:01:40 crc kubenswrapper[4904]: E1121 14:01:40.343115 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-api" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343122 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-api" Nov 21 14:01:40 crc kubenswrapper[4904]: E1121 14:01:40.343138 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2790bd-7e69-4181-8240-95640e6696e4" containerName="keystone-cron" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343145 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2790bd-7e69-4181-8240-95640e6696e4" containerName="keystone-cron" Nov 21 14:01:40 crc kubenswrapper[4904]: E1121 14:01:40.343163 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8eecc3-a2e3-427e-af0e-adc6e08416ad" containerName="heat-engine" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343170 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8eecc3-a2e3-427e-af0e-adc6e08416ad" containerName="heat-engine" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343445 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-listener" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343478 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-api" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343491 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="246116d1-5334-46cf-bcb6-dd2223cf67d7" containerName="aodh-db-sync" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343510 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2790bd-7e69-4181-8240-95640e6696e4" containerName="keystone-cron" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343518 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-notifier" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343529 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8eecc3-a2e3-427e-af0e-adc6e08416ad" containerName="heat-engine" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.343541 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" containerName="aodh-evaluator" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.348070 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.353607 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.355460 4904 scope.go:117] "RemoveContainer" containerID="a742b48696832c2016485f0b184f09de95c0419c776493851cb43530e0c7ce90" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.359181 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.359566 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.359576 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.360077 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-lqfk6" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.362494 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.426602 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-public-tls-certs\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.426710 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-scripts\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.426771 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-internal-tls-certs\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.426794 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.426835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-config-data\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.426858 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msk45\" (UniqueName: \"kubernetes.io/projected/53e60150-0305-4106-8864-769576a7016c-kube-api-access-msk45\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.527774 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="06220aa0-c8a9-49c0-a1f6-f71022c409d6" path="/var/lib/kubelet/pods/06220aa0-c8a9-49c0-a1f6-f71022c409d6/volumes" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.529236 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-public-tls-certs\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.530368 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-scripts\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.530420 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-internal-tls-certs\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.530445 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.530485 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-config-data\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.530558 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msk45\" (UniqueName: \"kubernetes.io/projected/53e60150-0305-4106-8864-769576a7016c-kube-api-access-msk45\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.533996 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-public-tls-certs\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.534328 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-scripts\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.534917 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-config-data\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.535089 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-internal-tls-certs\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc 
kubenswrapper[4904]: I1121 14:01:40.539007 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53e60150-0305-4106-8864-769576a7016c-combined-ca-bundle\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.549789 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msk45\" (UniqueName: \"kubernetes.io/projected/53e60150-0305-4106-8864-769576a7016c-kube-api-access-msk45\") pod \"aodh-0\" (UID: \"53e60150-0305-4106-8864-769576a7016c\") " pod="openstack/aodh-0" Nov 21 14:01:40 crc kubenswrapper[4904]: I1121 14:01:40.675911 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 21 14:01:41 crc kubenswrapper[4904]: I1121 14:01:41.226798 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 21 14:01:41 crc kubenswrapper[4904]: I1121 14:01:41.280105 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53e60150-0305-4106-8864-769576a7016c","Type":"ContainerStarted","Data":"187d0e7d50759fc3810974a575bc93047a05936d0f80aa02a6f6159c5db599fa"} Nov 21 14:01:42 crc kubenswrapper[4904]: I1121 14:01:42.302434 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53e60150-0305-4106-8864-769576a7016c","Type":"ContainerStarted","Data":"5867ad2ff415249a664c1d5e108b37a283927822aebc1336d61a4e1cd8b345ff"} Nov 21 14:01:48 crc kubenswrapper[4904]: I1121 14:01:48.377342 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53e60150-0305-4106-8864-769576a7016c","Type":"ContainerStarted","Data":"290db72a52ac0a91cb8fcb371c3d79e32eb518b1295076d84b1b7008409c710c"} Nov 21 14:01:49 crc kubenswrapper[4904]: I1121 14:01:49.401368 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53e60150-0305-4106-8864-769576a7016c","Type":"ContainerStarted","Data":"3ffcad5bd4265e8a571fd94456834d96b87580e3adb9792006db6e0575dc90e9"} Nov 21 14:01:50 crc kubenswrapper[4904]: I1121 14:01:50.417215 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"53e60150-0305-4106-8864-769576a7016c","Type":"ContainerStarted","Data":"ddd82e7ee980dc9364001b27a47bbc948a22174a492b55ce6f7985d5c3b46b8b"} Nov 21 14:01:50 crc kubenswrapper[4904]: I1121 14:01:50.418821 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" event={"ID":"16c65fc0-a096-4ee6-ba62-cc26be39eda9","Type":"ContainerDied","Data":"6774cc8d57618da61cea01e300b7095f7363bf883fc4a165aaf5822e066f543f"} Nov 21 14:01:50 crc kubenswrapper[4904]: I1121 14:01:50.418688 4904 generic.go:334] "Generic (PLEG): container finished" podID="16c65fc0-a096-4ee6-ba62-cc26be39eda9" containerID="6774cc8d57618da61cea01e300b7095f7363bf883fc4a165aaf5822e066f543f" exitCode=0 Nov 21 14:01:50 crc kubenswrapper[4904]: I1121 14:01:50.474861 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.116574052 podStartE2EDuration="10.47483126s" podCreationTimestamp="2025-11-21 14:01:40 +0000 UTC" firstStartedPulling="2025-11-21 14:01:41.234332603 +0000 UTC m=+1775.355865155" lastFinishedPulling="2025-11-21 14:01:49.592589811 +0000 UTC m=+1783.714122363" observedRunningTime="2025-11-21 14:01:50.461601475 +0000 UTC m=+1784.583134027" 
watchObservedRunningTime="2025-11-21 14:01:50.47483126 +0000 UTC m=+1784.596363822" Nov 21 14:01:50 crc kubenswrapper[4904]: I1121 14:01:50.514392 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:01:50 crc kubenswrapper[4904]: E1121 14:01:50.515038 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.067085 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.157761 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpmjs\" (UniqueName: \"kubernetes.io/projected/16c65fc0-a096-4ee6-ba62-cc26be39eda9-kube-api-access-tpmjs\") pod \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.157924 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-repo-setup-combined-ca-bundle\") pod \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.158004 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-ssh-key\") pod \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.158054 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-inventory\") pod \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\" (UID: \"16c65fc0-a096-4ee6-ba62-cc26be39eda9\") " Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.165830 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c65fc0-a096-4ee6-ba62-cc26be39eda9-kube-api-access-tpmjs" (OuterVolumeSpecName: "kube-api-access-tpmjs") pod "16c65fc0-a096-4ee6-ba62-cc26be39eda9" (UID: "16c65fc0-a096-4ee6-ba62-cc26be39eda9"). InnerVolumeSpecName "kube-api-access-tpmjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.177574 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "16c65fc0-a096-4ee6-ba62-cc26be39eda9" (UID: "16c65fc0-a096-4ee6-ba62-cc26be39eda9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.194398 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "16c65fc0-a096-4ee6-ba62-cc26be39eda9" (UID: "16c65fc0-a096-4ee6-ba62-cc26be39eda9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.266889 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpmjs\" (UniqueName: \"kubernetes.io/projected/16c65fc0-a096-4ee6-ba62-cc26be39eda9-kube-api-access-tpmjs\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.266948 4904 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.266967 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.284882 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-inventory" (OuterVolumeSpecName: "inventory") pod "16c65fc0-a096-4ee6-ba62-cc26be39eda9" (UID: "16c65fc0-a096-4ee6-ba62-cc26be39eda9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.371513 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16c65fc0-a096-4ee6-ba62-cc26be39eda9-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.479507 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" event={"ID":"16c65fc0-a096-4ee6-ba62-cc26be39eda9","Type":"ContainerDied","Data":"91a4814b16b325866669f70eb6233a97814589f3cd2f97bb5430650afd68afaf"} Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.480295 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a4814b16b325866669f70eb6233a97814589f3cd2f97bb5430650afd68afaf" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.480177 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.550178 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7"] Nov 21 14:01:52 crc kubenswrapper[4904]: E1121 14:01:52.550728 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c65fc0-a096-4ee6-ba62-cc26be39eda9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.550751 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c65fc0-a096-4ee6-ba62-cc26be39eda9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.550949 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c65fc0-a096-4ee6-ba62-cc26be39eda9" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.551737 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.554584 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.554929 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.555111 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.555345 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.562910 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7"] Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.688393 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.688517 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.688553 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qfp7\" (UniqueName: \"kubernetes.io/projected/59615e38-ebdb-4ce9-9922-942c2ff0d82c-kube-api-access-5qfp7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.688571 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.790142 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.790225 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.790253 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qfp7\" (UniqueName: \"kubernetes.io/projected/59615e38-ebdb-4ce9-9922-942c2ff0d82c-kube-api-access-5qfp7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.790272 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.796422 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.796461 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.796560 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.808641 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5qfp7\" (UniqueName: \"kubernetes.io/projected/59615e38-ebdb-4ce9-9922-942c2ff0d82c-kube-api-access-5qfp7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:52 crc kubenswrapper[4904]: I1121 14:01:52.872928 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:01:53 crc kubenswrapper[4904]: I1121 14:01:53.427884 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7"] Nov 21 14:01:53 crc kubenswrapper[4904]: I1121 14:01:53.492427 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" event={"ID":"59615e38-ebdb-4ce9-9922-942c2ff0d82c","Type":"ContainerStarted","Data":"8bfa4cf89de7b47d7b88d4405dcb9f887229bdb37a821cdc5f29591d43493b9f"} Nov 21 14:01:54 crc kubenswrapper[4904]: I1121 14:01:54.511055 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" event={"ID":"59615e38-ebdb-4ce9-9922-942c2ff0d82c","Type":"ContainerStarted","Data":"f2f8f2857561e04916f01f486dba33f942a00347eb1f6b6fcddbbd7376ec5d3a"} Nov 21 14:01:54 crc kubenswrapper[4904]: I1121 14:01:54.545139 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" podStartSLOduration=1.812949487 podStartE2EDuration="2.54511105s" podCreationTimestamp="2025-11-21 14:01:52 +0000 UTC" firstStartedPulling="2025-11-21 14:01:53.437081926 +0000 UTC m=+1787.558614498" lastFinishedPulling="2025-11-21 14:01:54.169243479 +0000 UTC m=+1788.290776061" observedRunningTime="2025-11-21 14:01:54.53328563 +0000 UTC m=+1788.654818192" watchObservedRunningTime="2025-11-21 14:01:54.54511105 +0000 UTC m=+1788.666643612" Nov 21 14:02:05 crc kubenswrapper[4904]: I1121 14:02:05.513546 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:02:05 crc kubenswrapper[4904]: E1121 14:02:05.514779 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:02:16 crc kubenswrapper[4904]: I1121 14:02:16.528856 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:02:16 crc kubenswrapper[4904]: E1121 14:02:16.530175 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:02:25 crc kubenswrapper[4904]: I1121 14:02:25.723688 4904 scope.go:117] "RemoveContainer" containerID="4d3df7d6bc827d40e499f0a77349b33a92fe2707bd35c365cbdad70cc5b42536" Nov 21 
14:02:30 crc kubenswrapper[4904]: I1121 14:02:30.513940 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:02:30 crc kubenswrapper[4904]: I1121 14:02:30.979380 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"c440976231c18075c6b34421dc688eb2b29ec89c3d96f4545c581aa060fb19c0"} Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.167116 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnglv"] Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.171034 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.186408 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnglv"] Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.316066 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72j89\" (UniqueName: \"kubernetes.io/projected/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-kube-api-access-72j89\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.316614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-utilities\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.316795 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-catalog-content\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.419504 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72j89\" (UniqueName: \"kubernetes.io/projected/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-kube-api-access-72j89\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.419587 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-utilities\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.419671 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-catalog-content\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.420446 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-catalog-content\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.420565 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-utilities\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.443937 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72j89\" (UniqueName: \"kubernetes.io/projected/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-kube-api-access-72j89\") pod \"redhat-marketplace-rnglv\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:30 crc kubenswrapper[4904]: I1121 14:03:30.499557 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.040901 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnglv"] Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.556284 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fvvhk"] Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.559119 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.580608 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvvhk"] Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.661080 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-utilities\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.661348 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-catalog-content\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.661475 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krcv6\" (UniqueName: \"kubernetes.io/projected/5c668860-8d8c-46a9-97c2-e7eb4decb43c-kube-api-access-krcv6\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.734689 4904 generic.go:334] "Generic (PLEG): container finished" podID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerID="0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c" exitCode=0 Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.734742 4904 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnglv" event={"ID":"411c0ab2-837d-4ab1-8389-ac37d3aa4d54","Type":"ContainerDied","Data":"0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c"} Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.734773 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnglv" event={"ID":"411c0ab2-837d-4ab1-8389-ac37d3aa4d54","Type":"ContainerStarted","Data":"032586548b6dd6fd9043b8e5591f00d36daf9928fcd86b640c3d72625c5d28b2"} Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.763669 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-utilities\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.763935 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-catalog-content\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.764054 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krcv6\" (UniqueName: \"kubernetes.io/projected/5c668860-8d8c-46a9-97c2-e7eb4decb43c-kube-api-access-krcv6\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.764248 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-utilities\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.764467 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-catalog-content\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.797747 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krcv6\" (UniqueName: \"kubernetes.io/projected/5c668860-8d8c-46a9-97c2-e7eb4decb43c-kube-api-access-krcv6\") pod \"redhat-operators-fvvhk\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:31 crc kubenswrapper[4904]: I1121 14:03:31.898705 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:32 crc kubenswrapper[4904]: I1121 14:03:32.438174 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvvhk"] Nov 21 14:03:32 crc kubenswrapper[4904]: I1121 14:03:32.756312 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvhk" event={"ID":"5c668860-8d8c-46a9-97c2-e7eb4decb43c","Type":"ContainerStarted","Data":"8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71"} Nov 21 14:03:32 crc kubenswrapper[4904]: I1121 14:03:32.757153 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvhk" event={"ID":"5c668860-8d8c-46a9-97c2-e7eb4decb43c","Type":"ContainerStarted","Data":"bdbeb1c7ed69f3b7916d970d2df487624a5a31eff57dd2483bc37fbd45659abf"} Nov 21 14:03:32 crc kubenswrapper[4904]: I1121 14:03:32.758627 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnglv" event={"ID":"411c0ab2-837d-4ab1-8389-ac37d3aa4d54","Type":"ContainerStarted","Data":"48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e"} Nov 21 14:03:33 crc kubenswrapper[4904]: I1121 14:03:33.775263 4904 generic.go:334] "Generic (PLEG): container finished" podID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerID="8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71" exitCode=0 Nov 21 14:03:33 crc kubenswrapper[4904]: I1121 14:03:33.775324 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvhk" event={"ID":"5c668860-8d8c-46a9-97c2-e7eb4decb43c","Type":"ContainerDied","Data":"8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71"} Nov 21 14:03:36 crc kubenswrapper[4904]: I1121 14:03:36.831153 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvhk" event={"ID":"5c668860-8d8c-46a9-97c2-e7eb4decb43c","Type":"ContainerStarted","Data":"da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94"} Nov 21 14:03:36 crc kubenswrapper[4904]: I1121 14:03:36.835857 4904 generic.go:334] "Generic (PLEG): container finished" podID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerID="48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e" exitCode=0 Nov 21 14:03:36 crc kubenswrapper[4904]: I1121 14:03:36.835921 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnglv" event={"ID":"411c0ab2-837d-4ab1-8389-ac37d3aa4d54","Type":"ContainerDied","Data":"48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e"} Nov 21 14:03:38 crc kubenswrapper[4904]: I1121 14:03:38.862802 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnglv" event={"ID":"411c0ab2-837d-4ab1-8389-ac37d3aa4d54","Type":"ContainerStarted","Data":"fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930"} Nov 21 14:03:38 crc kubenswrapper[4904]: I1121 14:03:38.892220 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rnglv" podStartSLOduration=3.287185935 podStartE2EDuration="8.892194861s" podCreationTimestamp="2025-11-21 14:03:30 +0000 UTC" firstStartedPulling="2025-11-21 14:03:31.737226626 +0000 UTC m=+1885.858759178" lastFinishedPulling="2025-11-21 14:03:37.342235562 +0000 UTC m=+1891.463768104" observedRunningTime="2025-11-21 14:03:38.890844598 +0000 UTC m=+1893.012377250" 
watchObservedRunningTime="2025-11-21 14:03:38.892194861 +0000 UTC m=+1893.013727423" Nov 21 14:03:40 crc kubenswrapper[4904]: I1121 14:03:40.500328 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:40 crc kubenswrapper[4904]: I1121 14:03:40.500901 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:40 crc kubenswrapper[4904]: I1121 14:03:40.590788 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:42 crc kubenswrapper[4904]: I1121 14:03:42.942899 4904 generic.go:334] "Generic (PLEG): container finished" podID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerID="da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94" exitCode=0 Nov 21 14:03:42 crc kubenswrapper[4904]: I1121 14:03:42.943393 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvhk" event={"ID":"5c668860-8d8c-46a9-97c2-e7eb4decb43c","Type":"ContainerDied","Data":"da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94"} Nov 21 14:03:42 crc kubenswrapper[4904]: I1121 14:03:42.949439 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:03:43 crc kubenswrapper[4904]: I1121 14:03:43.960560 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvhk" event={"ID":"5c668860-8d8c-46a9-97c2-e7eb4decb43c","Type":"ContainerStarted","Data":"2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a"} Nov 21 14:03:43 crc kubenswrapper[4904]: I1121 14:03:43.986012 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fvvhk" podStartSLOduration=3.140552515 podStartE2EDuration="12.98596308s" podCreationTimestamp="2025-11-21 14:03:31 +0000 UTC" firstStartedPulling="2025-11-21 14:03:33.778216765 +0000 UTC m=+1887.899749327" lastFinishedPulling="2025-11-21 14:03:43.62362733 +0000 UTC m=+1897.745159892" observedRunningTime="2025-11-21 14:03:43.98150436 +0000 UTC m=+1898.103036922" watchObservedRunningTime="2025-11-21 14:03:43.98596308 +0000 UTC m=+1898.107495652" Nov 21 14:03:50 crc kubenswrapper[4904]: I1121 14:03:50.559328 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:50 crc kubenswrapper[4904]: I1121 14:03:50.868534 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnglv"] Nov 21 14:03:51 crc kubenswrapper[4904]: I1121 14:03:51.036132 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnglv" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerName="registry-server" containerID="cri-o://fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930" gracePeriod=2 Nov 21 14:03:51 crc kubenswrapper[4904]: I1121 14:03:51.899045 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:51 crc kubenswrapper[4904]: I1121 14:03:51.900048 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.030453 4904 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.056144 4904 generic.go:334] "Generic (PLEG): container finished" podID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerID="fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930" exitCode=0 Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.056235 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnglv" event={"ID":"411c0ab2-837d-4ab1-8389-ac37d3aa4d54","Type":"ContainerDied","Data":"fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930"} Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.056309 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnglv" event={"ID":"411c0ab2-837d-4ab1-8389-ac37d3aa4d54","Type":"ContainerDied","Data":"032586548b6dd6fd9043b8e5591f00d36daf9928fcd86b640c3d72625c5d28b2"} Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.056334 4904 scope.go:117] "RemoveContainer" containerID="fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.056483 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnglv" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.090874 4904 scope.go:117] "RemoveContainer" containerID="48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.139188 4904 scope.go:117] "RemoveContainer" containerID="0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.159965 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72j89\" (UniqueName: \"kubernetes.io/projected/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-kube-api-access-72j89\") pod \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.160050 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-catalog-content\") pod \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.160128 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-utilities\") pod \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\" (UID: \"411c0ab2-837d-4ab1-8389-ac37d3aa4d54\") " Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.161386 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-utilities" (OuterVolumeSpecName: "utilities") pod "411c0ab2-837d-4ab1-8389-ac37d3aa4d54" (UID: "411c0ab2-837d-4ab1-8389-ac37d3aa4d54"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.169939 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-kube-api-access-72j89" (OuterVolumeSpecName: "kube-api-access-72j89") pod "411c0ab2-837d-4ab1-8389-ac37d3aa4d54" (UID: "411c0ab2-837d-4ab1-8389-ac37d3aa4d54"). InnerVolumeSpecName "kube-api-access-72j89". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.184359 4904 scope.go:117] "RemoveContainer" containerID="fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930" Nov 21 14:03:52 crc kubenswrapper[4904]: E1121 14:03:52.184882 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930\": container with ID starting with fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930 not found: ID does not exist" containerID="fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.184922 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930"} err="failed to get container status \"fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930\": rpc error: code = NotFound desc = could not find container \"fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930\": container with ID starting with fea76a63a3c0bbf4b1f2f978bd59fa13ff79fbd9ca5c390c9b32500b80f29930 not found: ID does not exist" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.184954 4904 scope.go:117] "RemoveContainer" containerID="48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e" Nov 21 14:03:52 crc kubenswrapper[4904]: E1121 14:03:52.185412 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e\": container with ID starting with 48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e not found: ID does not exist" containerID="48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.185478 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e"} err="failed to get container status \"48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e\": rpc error: code = NotFound desc = could not find container \"48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e\": container with ID starting with 48d938f034455b5db0455327be889e1b22fb5eafbdbe181f5d899d145035e74e not found: ID does not exist" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.185516 4904 scope.go:117] "RemoveContainer" containerID="0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c" Nov 21 14:03:52 crc kubenswrapper[4904]: E1121 14:03:52.185951 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c\": container with ID starting with 0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c not found: ID does not 
exist" containerID="0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.185985 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c"} err="failed to get container status \"0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c\": rpc error: code = NotFound desc = could not find container \"0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c\": container with ID starting with 0e1a3490b586d3e9446d5b46b67dce313922ab5dd0670097561bb3f5bdb6058c not found: ID does not exist" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.186905 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "411c0ab2-837d-4ab1-8389-ac37d3aa4d54" (UID: "411c0ab2-837d-4ab1-8389-ac37d3aa4d54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.265247 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.265287 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.265297 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72j89\" (UniqueName: \"kubernetes.io/projected/411c0ab2-837d-4ab1-8389-ac37d3aa4d54-kube-api-access-72j89\") on node \"crc\" DevicePath \"\"" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.404208 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnglv"] Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.414407 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnglv"] Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.538495 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" path="/var/lib/kubelet/pods/411c0ab2-837d-4ab1-8389-ac37d3aa4d54/volumes" Nov 21 14:03:52 crc kubenswrapper[4904]: I1121 14:03:52.954733 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fvvhk" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="registry-server" probeResult="failure" output=< Nov 21 14:03:52 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:03:52 crc kubenswrapper[4904]: > Nov 21 14:04:01 crc kubenswrapper[4904]: I1121 14:04:01.965121 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:04:02 crc kubenswrapper[4904]: I1121 14:04:02.035304 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:04:02 crc kubenswrapper[4904]: I1121 14:04:02.755283 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvvhk"] Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.203396 
4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fvvhk" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="registry-server" containerID="cri-o://2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a" gracePeriod=2 Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.796411 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.888602 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-catalog-content\") pod \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.888975 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-utilities\") pod \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.889108 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krcv6\" (UniqueName: \"kubernetes.io/projected/5c668860-8d8c-46a9-97c2-e7eb4decb43c-kube-api-access-krcv6\") pod \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\" (UID: \"5c668860-8d8c-46a9-97c2-e7eb4decb43c\") " Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.890075 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-utilities" (OuterVolumeSpecName: "utilities") pod "5c668860-8d8c-46a9-97c2-e7eb4decb43c" (UID: "5c668860-8d8c-46a9-97c2-e7eb4decb43c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.890634 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.896262 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c668860-8d8c-46a9-97c2-e7eb4decb43c-kube-api-access-krcv6" (OuterVolumeSpecName: "kube-api-access-krcv6") pod "5c668860-8d8c-46a9-97c2-e7eb4decb43c" (UID: "5c668860-8d8c-46a9-97c2-e7eb4decb43c"). InnerVolumeSpecName "kube-api-access-krcv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.993944 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krcv6\" (UniqueName: \"kubernetes.io/projected/5c668860-8d8c-46a9-97c2-e7eb4decb43c-kube-api-access-krcv6\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:03 crc kubenswrapper[4904]: I1121 14:04:03.996543 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c668860-8d8c-46a9-97c2-e7eb4decb43c" (UID: "5c668860-8d8c-46a9-97c2-e7eb4decb43c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.096954 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c668860-8d8c-46a9-97c2-e7eb4decb43c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.229150 4904 generic.go:334] "Generic (PLEG): container finished" podID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerID="2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a" exitCode=0 Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.229242 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvhk" event={"ID":"5c668860-8d8c-46a9-97c2-e7eb4decb43c","Type":"ContainerDied","Data":"2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a"} Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.229288 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvvhk" event={"ID":"5c668860-8d8c-46a9-97c2-e7eb4decb43c","Type":"ContainerDied","Data":"bdbeb1c7ed69f3b7916d970d2df487624a5a31eff57dd2483bc37fbd45659abf"} Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.229339 4904 scope.go:117] "RemoveContainer" containerID="2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.229736 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvvhk" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.287428 4904 scope.go:117] "RemoveContainer" containerID="da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.299848 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvvhk"] Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.312316 4904 scope.go:117] "RemoveContainer" containerID="8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.316638 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fvvhk"] Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.374235 4904 scope.go:117] "RemoveContainer" containerID="2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a" Nov 21 14:04:04 crc kubenswrapper[4904]: E1121 14:04:04.375010 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a\": container with ID starting with 2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a not found: ID does not exist" containerID="2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.375077 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a"} err="failed to get container status \"2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a\": rpc error: code = NotFound desc = could not find container \"2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a\": container with ID starting with 2880d1c9345c8719cd82bd6d49e21197f68e0288827d2ad16ecc08fd27b4963a not found: ID does not exist" Nov 21 14:04:04 crc 
kubenswrapper[4904]: I1121 14:04:04.375120 4904 scope.go:117] "RemoveContainer" containerID="da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94" Nov 21 14:04:04 crc kubenswrapper[4904]: E1121 14:04:04.375688 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94\": container with ID starting with da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94 not found: ID does not exist" containerID="da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.375754 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94"} err="failed to get container status \"da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94\": rpc error: code = NotFound desc = could not find container \"da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94\": container with ID starting with da70205656ab0150525b5b7cb4ec5349a7f5f29d7120ec2ef322da401ec56a94 not found: ID does not exist" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.375787 4904 scope.go:117] "RemoveContainer" containerID="8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71" Nov 21 14:04:04 crc kubenswrapper[4904]: E1121 14:04:04.376176 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71\": container with ID starting with 8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71 not found: ID does not exist" containerID="8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.376236 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71"} err="failed to get container status \"8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71\": rpc error: code = NotFound desc = could not find container \"8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71\": container with ID starting with 8c445f6e46e36aef222dd3a14ac257bab32d228b27086b4dd6e07dd40b049d71 not found: ID does not exist" Nov 21 14:04:04 crc kubenswrapper[4904]: I1121 14:04:04.528139 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" path="/var/lib/kubelet/pods/5c668860-8d8c-46a9-97c2-e7eb4decb43c/volumes" Nov 21 14:04:21 crc kubenswrapper[4904]: I1121 14:04:21.076801 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bnss2"] Nov 21 14:04:21 crc kubenswrapper[4904]: I1121 14:04:21.093974 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-b2e7-account-create-htps7"] Nov 21 14:04:21 crc kubenswrapper[4904]: I1121 14:04:21.106938 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-b2e7-account-create-htps7"] Nov 21 14:04:21 crc kubenswrapper[4904]: I1121 14:04:21.120423 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-bnss2"] Nov 21 14:04:22 crc kubenswrapper[4904]: I1121 14:04:22.534591 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="23a702c6-8a09-49ca-9e41-4d35f38c39db" path="/var/lib/kubelet/pods/23a702c6-8a09-49ca-9e41-4d35f38c39db/volumes" Nov 21 14:04:22 crc kubenswrapper[4904]: I1121 14:04:22.535276 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7d795f1-b251-48e1-a516-911f462ed052" path="/var/lib/kubelet/pods/e7d795f1-b251-48e1-a516-911f462ed052/volumes" Nov 21 14:04:25 crc kubenswrapper[4904]: I1121 14:04:25.891546 4904 scope.go:117] "RemoveContainer" containerID="7d810b64082ac75247b55dfc2e769b02f46584bf89364de67343b6bb418feeb9" Nov 21 14:04:25 crc kubenswrapper[4904]: I1121 14:04:25.935129 4904 scope.go:117] "RemoveContainer" containerID="73aa406094e52a1b1b90d94bb93a9751331f3cd6418c35513ed2b57d1fd38f05" Nov 21 14:04:25 crc kubenswrapper[4904]: I1121 14:04:25.995131 4904 scope.go:117] "RemoveContainer" containerID="c8daccdc7702d51206d0348d52998a120b65f15d105105652611e4619efe976f" Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.052883 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ebc5-account-create-tx8n2"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.074694 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cb90-account-create-jfxjg"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.087853 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xt26t"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.099508 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f956-account-create-w8cb6"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.115626 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-tmc7t"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.130596 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xt26t"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.141766 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cb90-account-create-jfxjg"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.152703 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f956-account-create-w8cb6"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.161297 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-tmc7t"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.169244 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ebc5-account-create-tx8n2"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.179459 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mnnlk"] Nov 21 14:04:27 crc kubenswrapper[4904]: I1121 14:04:27.188823 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mnnlk"] Nov 21 14:04:28 crc kubenswrapper[4904]: I1121 14:04:28.531047 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e9748c-0870-4dc2-bdc8-4843d15bd49f" path="/var/lib/kubelet/pods/20e9748c-0870-4dc2-bdc8-4843d15bd49f/volumes" Nov 21 14:04:28 crc kubenswrapper[4904]: I1121 14:04:28.533104 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b125736-8015-45e0-987a-c795d3aabcc8" path="/var/lib/kubelet/pods/5b125736-8015-45e0-987a-c795d3aabcc8/volumes" Nov 21 14:04:28 crc kubenswrapper[4904]: I1121 14:04:28.534372 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="be4959da-6d83-452d-a3e8-4796466c7d2f" path="/var/lib/kubelet/pods/be4959da-6d83-452d-a3e8-4796466c7d2f/volumes" Nov 21 14:04:28 crc kubenswrapper[4904]: I1121 14:04:28.535713 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81d71ad-5a52-4774-a30e-19a718fdd6f3" path="/var/lib/kubelet/pods/c81d71ad-5a52-4774-a30e-19a718fdd6f3/volumes" Nov 21 14:04:28 crc kubenswrapper[4904]: I1121 14:04:28.538385 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40cc279-7f5b-4dfd-9e1f-18ab4180b067" path="/var/lib/kubelet/pods/e40cc279-7f5b-4dfd-9e1f-18ab4180b067/volumes" Nov 21 14:04:28 crc kubenswrapper[4904]: I1121 14:04:28.539144 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7d1006-0238-446c-a9f8-da640c971ed2" path="/var/lib/kubelet/pods/ec7d1006-0238-446c-a9f8-da640c971ed2/volumes" Nov 21 14:04:30 crc kubenswrapper[4904]: I1121 14:04:30.038288 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-8b64-account-create-bcpmm"] Nov 21 14:04:30 crc kubenswrapper[4904]: I1121 14:04:30.050725 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-54rdk"] Nov 21 14:04:30 crc kubenswrapper[4904]: I1121 14:04:30.064814 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-54rdk"] Nov 21 14:04:30 crc kubenswrapper[4904]: I1121 14:04:30.075254 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-8b64-account-create-bcpmm"] Nov 21 14:04:30 crc kubenswrapper[4904]: I1121 14:04:30.527207 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a" path="/var/lib/kubelet/pods/3d06a296-4f71-4ce1-93e9-d79ca7e1ba9a/volumes" Nov 21 14:04:30 crc kubenswrapper[4904]: I1121 14:04:30.527994 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3dd6ef-3fdf-4edb-8517-9b411146916f" path="/var/lib/kubelet/pods/9d3dd6ef-3fdf-4edb-8517-9b411146916f/volumes" Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.071953 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-qnrm6"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.087454 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-0480-account-create-whgf4"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.106265 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-717d-account-create-cfcmh"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.118699 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-a701-account-create-bsz8k"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.132297 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-nwl7h"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.145686 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-zr5d7"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.158012 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-13a7-account-create-qbk7f"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.166105 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-0480-account-create-whgf4"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.173775 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-create-qnrm6"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.181738 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-zr5d7"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.190597 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-nwl7h"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.201632 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-717d-account-create-cfcmh"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.210954 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-a701-account-create-bsz8k"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.219496 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-13a7-account-create-qbk7f"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.228039 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9zzvb"] Nov 21 14:04:57 crc kubenswrapper[4904]: I1121 14:04:57.237790 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9zzvb"] Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.113604 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.113701 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.533979 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ad7f52-daab-4bea-a519-5adf0f636168" path="/var/lib/kubelet/pods/19ad7f52-daab-4bea-a519-5adf0f636168/volumes" Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.535720 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d76b81c-875e-4554-9b27-1334b70ef872" path="/var/lib/kubelet/pods/1d76b81c-875e-4554-9b27-1334b70ef872/volumes" Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.537647 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9cc95e-4ef2-44ef-a1e0-14a09425771b" path="/var/lib/kubelet/pods/4c9cc95e-4ef2-44ef-a1e0-14a09425771b/volumes" Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.539367 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8852fc8c-175e-4cf7-854d-230dd3c32d08" path="/var/lib/kubelet/pods/8852fc8c-175e-4cf7-854d-230dd3c32d08/volumes" Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.542722 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90da1076-1698-403c-9b8b-6a93b89c47cf" path="/var/lib/kubelet/pods/90da1076-1698-403c-9b8b-6a93b89c47cf/volumes" Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.544039 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f4eaf0-e81c-4a85-a91a-7ca22ced143c" path="/var/lib/kubelet/pods/91f4eaf0-e81c-4a85-a91a-7ca22ced143c/volumes" Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.545302 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="f71b15da-cc9c-4358-a95d-36baef7254e4" path="/var/lib/kubelet/pods/f71b15da-cc9c-4358-a95d-36baef7254e4/volumes" Nov 21 14:04:58 crc kubenswrapper[4904]: I1121 14:04:58.547251 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba76450-d33c-46a2-ab88-77f4390f174e" path="/var/lib/kubelet/pods/fba76450-d33c-46a2-ab88-77f4390f174e/volumes" Nov 21 14:05:10 crc kubenswrapper[4904]: I1121 14:05:10.062486 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-v45q6"] Nov 21 14:05:10 crc kubenswrapper[4904]: I1121 14:05:10.077220 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-v45q6"] Nov 21 14:05:10 crc kubenswrapper[4904]: I1121 14:05:10.528520 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ccebc8d-97ab-4df0-be88-3e6147a45b7a" path="/var/lib/kubelet/pods/7ccebc8d-97ab-4df0-be88-3e6147a45b7a/volumes" Nov 21 14:05:15 crc kubenswrapper[4904]: I1121 14:05:15.050788 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ssmh2"] Nov 21 14:05:15 crc kubenswrapper[4904]: I1121 14:05:15.066447 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ssmh2"] Nov 21 14:05:16 crc kubenswrapper[4904]: I1121 14:05:16.537816 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34287b99-9180-4bba-ae2d-bbe3eda9056f" path="/var/lib/kubelet/pods/34287b99-9180-4bba-ae2d-bbe3eda9056f/volumes" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.119680 4904 scope.go:117] "RemoveContainer" containerID="397caf680fc273b5e9e5bbcd53f1b327ab961116f73bdf8007fb1a10138ff398" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.157544 4904 scope.go:117] "RemoveContainer" containerID="8d0e41b006c0d07dc2ff3e3f6200bf9798082a398991e9c332d83f61551ff435" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.227344 4904 scope.go:117] "RemoveContainer" containerID="bb4e65fdd1b30a3485f4753a530c4927cdc2da17b8abc489cea3f8b962c2b70f" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.274446 4904 scope.go:117] "RemoveContainer" containerID="83a0dba8f011de55212a24290fb249b9eb660b64ebe5d4363f6d6aefc2754fb1" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.364063 4904 scope.go:117] "RemoveContainer" containerID="14d5b92e7f3ee6d42f3235f09f83911e3a1c3d5911527c6984e827a8e19bc0bb" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.397216 4904 scope.go:117] "RemoveContainer" containerID="13eca21696bfd43d8c0a5eb2d1633e75c3b876688948eccbd3b1f49182d2d886" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.454991 4904 scope.go:117] "RemoveContainer" containerID="0887d28311b1d16d6df90fe17f093f78237f8638ae47f50c888b030f9f593206" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.478636 4904 scope.go:117] "RemoveContainer" containerID="a63f4ac02624e4d791ceaaf14a9ac2c154e40870f9236382e4845c5d1ff74034" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.500851 4904 scope.go:117] "RemoveContainer" containerID="a4a023eff86293a3fc6300e72d96113a1137ba9c84da6ba2ad22a6a150b2c703" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.529059 4904 scope.go:117] "RemoveContainer" containerID="9e3d0ec26eef04c65c4a2cc143bdf82f908cacf19dcf47bcfbc78a5ab77dbc28" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.553723 4904 scope.go:117] "RemoveContainer" containerID="8db646191f1bc05d30dd17fe929c156c8c38bdb46013226d86c400536c3c706e" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.575385 4904 scope.go:117] 
"RemoveContainer" containerID="0533f6d274ee3b6625dd22dc9148a53e2d1e773440d50b82002581f1f0d2ce51" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.599556 4904 scope.go:117] "RemoveContainer" containerID="d30a07e889e5a294f3fcbd6e6669fb24d224e91d1288c491b8eb1a063a595977" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.628021 4904 scope.go:117] "RemoveContainer" containerID="5f376de77b9f94a9f4ef54754937f734b69fddfc3a4cbbf527161c7185e6f750" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.661137 4904 scope.go:117] "RemoveContainer" containerID="2e8d1a7ed4b163160a12b4ade8267b8a321c514cfce88e0bef996eab97222411" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.703307 4904 scope.go:117] "RemoveContainer" containerID="494b623e68bbfc824b87c5f788cc36327839f708dca7ef150ea6088448d6a0e2" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.731786 4904 scope.go:117] "RemoveContainer" containerID="a49f58ece5968197c64b10a36fe0eb02bda642d6920ed2da01709784b866a965" Nov 21 14:05:26 crc kubenswrapper[4904]: I1121 14:05:26.766542 4904 scope.go:117] "RemoveContainer" containerID="4c091526d7411c4077c21ef005ceec5fdf7eb3ff0e45e410c0bf2d6e315bee0b" Nov 21 14:05:28 crc kubenswrapper[4904]: I1121 14:05:28.113512 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:05:28 crc kubenswrapper[4904]: I1121 14:05:28.113599 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:05:38 crc kubenswrapper[4904]: I1121 14:05:38.546110 4904 generic.go:334] "Generic (PLEG): container finished" podID="59615e38-ebdb-4ce9-9922-942c2ff0d82c" containerID="f2f8f2857561e04916f01f486dba33f942a00347eb1f6b6fcddbbd7376ec5d3a" exitCode=0 Nov 21 14:05:38 crc kubenswrapper[4904]: I1121 14:05:38.546188 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" event={"ID":"59615e38-ebdb-4ce9-9922-942c2ff0d82c","Type":"ContainerDied","Data":"f2f8f2857561e04916f01f486dba33f942a00347eb1f6b6fcddbbd7376ec5d3a"} Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.059280 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.137578 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-ssh-key\") pod \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.137834 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qfp7\" (UniqueName: \"kubernetes.io/projected/59615e38-ebdb-4ce9-9922-942c2ff0d82c-kube-api-access-5qfp7\") pod \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.137936 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-inventory\") pod \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.138444 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-bootstrap-combined-ca-bundle\") pod \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\" (UID: \"59615e38-ebdb-4ce9-9922-942c2ff0d82c\") " Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.146675 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "59615e38-ebdb-4ce9-9922-942c2ff0d82c" (UID: "59615e38-ebdb-4ce9-9922-942c2ff0d82c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.146893 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59615e38-ebdb-4ce9-9922-942c2ff0d82c-kube-api-access-5qfp7" (OuterVolumeSpecName: "kube-api-access-5qfp7") pod "59615e38-ebdb-4ce9-9922-942c2ff0d82c" (UID: "59615e38-ebdb-4ce9-9922-942c2ff0d82c"). InnerVolumeSpecName "kube-api-access-5qfp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.175198 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "59615e38-ebdb-4ce9-9922-942c2ff0d82c" (UID: "59615e38-ebdb-4ce9-9922-942c2ff0d82c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.184525 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-inventory" (OuterVolumeSpecName: "inventory") pod "59615e38-ebdb-4ce9-9922-942c2ff0d82c" (UID: "59615e38-ebdb-4ce9-9922-942c2ff0d82c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.241372 4904 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.241637 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.241701 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qfp7\" (UniqueName: \"kubernetes.io/projected/59615e38-ebdb-4ce9-9922-942c2ff0d82c-kube-api-access-5qfp7\") on node \"crc\" DevicePath \"\"" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.241745 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/59615e38-ebdb-4ce9-9922-942c2ff0d82c-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.577259 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" event={"ID":"59615e38-ebdb-4ce9-9922-942c2ff0d82c","Type":"ContainerDied","Data":"8bfa4cf89de7b47d7b88d4405dcb9f887229bdb37a821cdc5f29591d43493b9f"} Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.577835 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bfa4cf89de7b47d7b88d4405dcb9f887229bdb37a821cdc5f29591d43493b9f" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.577369 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.687364 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9"] Nov 21 14:05:40 crc kubenswrapper[4904]: E1121 14:05:40.688119 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="extract-utilities" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688149 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="extract-utilities" Nov 21 14:05:40 crc kubenswrapper[4904]: E1121 14:05:40.688164 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerName="extract-utilities" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688173 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerName="extract-utilities" Nov 21 14:05:40 crc kubenswrapper[4904]: E1121 14:05:40.688183 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="extract-content" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688191 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="extract-content" Nov 21 14:05:40 crc kubenswrapper[4904]: E1121 14:05:40.688205 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="registry-server" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688212 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="registry-server" Nov 21 14:05:40 crc kubenswrapper[4904]: E1121 14:05:40.688239 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerName="registry-server" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688247 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerName="registry-server" Nov 21 14:05:40 crc kubenswrapper[4904]: E1121 14:05:40.688258 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59615e38-ebdb-4ce9-9922-942c2ff0d82c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688267 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="59615e38-ebdb-4ce9-9922-942c2ff0d82c" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 14:05:40 crc kubenswrapper[4904]: E1121 14:05:40.688285 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerName="extract-content" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688292 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerName="extract-content" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688551 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="411c0ab2-837d-4ab1-8389-ac37d3aa4d54" containerName="registry-server" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688571 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="59615e38-ebdb-4ce9-9922-942c2ff0d82c" 
containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.688580 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c668860-8d8c-46a9-97c2-e7eb4decb43c" containerName="registry-server" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.689853 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.695690 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.695923 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.696065 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.696201 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.702037 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9"] Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.754835 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.754996 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.755731 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrw6\" (UniqueName: \"kubernetes.io/projected/6afae033-a8c3-4acb-a441-e7178c4bd031-kube-api-access-kzrw6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.857844 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzrw6\" (UniqueName: \"kubernetes.io/projected/6afae033-a8c3-4acb-a441-e7178c4bd031-kube-api-access-kzrw6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.857922 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.858059 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.863172 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.864159 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:40 crc kubenswrapper[4904]: I1121 14:05:40.878748 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzrw6\" (UniqueName: \"kubernetes.io/projected/6afae033-a8c3-4acb-a441-e7178c4bd031-kube-api-access-kzrw6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:41 crc kubenswrapper[4904]: I1121 14:05:41.011478 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:05:41 crc kubenswrapper[4904]: I1121 14:05:41.635434 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9"] Nov 21 14:05:42 crc kubenswrapper[4904]: I1121 14:05:42.605470 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" event={"ID":"6afae033-a8c3-4acb-a441-e7178c4bd031","Type":"ContainerStarted","Data":"fc38a30100d602ffc813ed69dcd6e6c3f742e20347f8f2faba89295ad5e70558"} Nov 21 14:05:42 crc kubenswrapper[4904]: I1121 14:05:42.606157 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" event={"ID":"6afae033-a8c3-4acb-a441-e7178c4bd031","Type":"ContainerStarted","Data":"8df7c510ba1120b2c3c5e03c058188430b4f1bc1f38f204a8213b2cb6c2e9ce1"} Nov 21 14:05:42 crc kubenswrapper[4904]: I1121 14:05:42.632350 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" podStartSLOduration=2.050011636 podStartE2EDuration="2.632325808s" podCreationTimestamp="2025-11-21 14:05:40 +0000 UTC" firstStartedPulling="2025-11-21 14:05:41.635897155 +0000 UTC m=+2015.757429707" lastFinishedPulling="2025-11-21 14:05:42.218211327 +0000 UTC m=+2016.339743879" observedRunningTime="2025-11-21 14:05:42.62872883 +0000 UTC m=+2016.750261392" watchObservedRunningTime="2025-11-21 14:05:42.632325808 +0000 UTC m=+2016.753858360" Nov 21 14:05:49 crc kubenswrapper[4904]: I1121 14:05:49.051744 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-6plvv"] Nov 21 14:05:49 crc kubenswrapper[4904]: I1121 14:05:49.065343 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-6plvv"] Nov 21 14:05:50 crc kubenswrapper[4904]: I1121 14:05:50.525616 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf7baa95-27e6-491e-a934-d79a287ca62d" path="/var/lib/kubelet/pods/cf7baa95-27e6-491e-a934-d79a287ca62d/volumes" Nov 21 14:05:55 crc kubenswrapper[4904]: I1121 14:05:55.050880 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4hqqk"] Nov 21 14:05:55 crc kubenswrapper[4904]: I1121 14:05:55.065213 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4hqqk"] Nov 21 14:05:56 crc kubenswrapper[4904]: I1121 14:05:56.529412 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9067c4e-8264-43c2-b86a-7950915f1a31" path="/var/lib/kubelet/pods/a9067c4e-8264-43c2-b86a-7950915f1a31/volumes" Nov 21 14:05:58 crc kubenswrapper[4904]: I1121 14:05:58.113231 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:05:58 crc kubenswrapper[4904]: I1121 14:05:58.113327 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:05:58 crc 
kubenswrapper[4904]: I1121 14:05:58.113392 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 14:05:58 crc kubenswrapper[4904]: I1121 14:05:58.114249 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c440976231c18075c6b34421dc688eb2b29ec89c3d96f4545c581aa060fb19c0"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:05:58 crc kubenswrapper[4904]: I1121 14:05:58.114312 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://c440976231c18075c6b34421dc688eb2b29ec89c3d96f4545c581aa060fb19c0" gracePeriod=600 Nov 21 14:05:58 crc kubenswrapper[4904]: I1121 14:05:58.892422 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="c440976231c18075c6b34421dc688eb2b29ec89c3d96f4545c581aa060fb19c0" exitCode=0 Nov 21 14:05:58 crc kubenswrapper[4904]: I1121 14:05:58.892499 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"c440976231c18075c6b34421dc688eb2b29ec89c3d96f4545c581aa060fb19c0"} Nov 21 14:05:58 crc kubenswrapper[4904]: I1121 14:05:58.893418 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b"} Nov 21 14:05:58 crc kubenswrapper[4904]: I1121 14:05:58.893460 4904 scope.go:117] "RemoveContainer" containerID="0e9a4317721c035d31d2686d3765ca9c0de38a913e1a907b28cd315206f87991" Nov 21 14:06:16 crc kubenswrapper[4904]: I1121 14:06:16.046221 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2s659"] Nov 21 14:06:16 crc kubenswrapper[4904]: I1121 14:06:16.067151 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2s659"] Nov 21 14:06:16 crc kubenswrapper[4904]: I1121 14:06:16.527509 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b427a9-510f-45b1-82bc-cf85bb44932b" path="/var/lib/kubelet/pods/20b427a9-510f-45b1-82bc-cf85bb44932b/volumes" Nov 21 14:06:27 crc kubenswrapper[4904]: I1121 14:06:27.198271 4904 scope.go:117] "RemoveContainer" containerID="417542c8dded4121e13e726802629777aa7fb90706ee8f4936852ab2a729dea7" Nov 21 14:06:27 crc kubenswrapper[4904]: I1121 14:06:27.264518 4904 scope.go:117] "RemoveContainer" containerID="65c3cd28e63e9c5e610aef68f4a79be6a81f554865470aaf05ac99f6f863d3d7" Nov 21 14:06:27 crc kubenswrapper[4904]: I1121 14:06:27.326206 4904 scope.go:117] "RemoveContainer" containerID="1c243466d3d8eb830ee5b9a97e928c7aa949fcb9dc08023366b3f52f50a398f3" Nov 21 14:06:39 crc kubenswrapper[4904]: I1121 14:06:39.051489 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lj7p5"] Nov 21 14:06:39 crc kubenswrapper[4904]: I1121 14:06:39.062480 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lj7p5"] Nov 21 
14:06:40 crc kubenswrapper[4904]: I1121 14:06:40.532965 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6201ad46-9eaf-4b17-b40f-e31756dea737" path="/var/lib/kubelet/pods/6201ad46-9eaf-4b17-b40f-e31756dea737/volumes" Nov 21 14:06:51 crc kubenswrapper[4904]: I1121 14:06:51.046287 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qg6bd"] Nov 21 14:06:51 crc kubenswrapper[4904]: I1121 14:06:51.059314 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qg6bd"] Nov 21 14:06:52 crc kubenswrapper[4904]: I1121 14:06:52.262191 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" podUID="e819c802-1c71-4668-bc99-5b41cc11c656" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 21 14:06:52 crc kubenswrapper[4904]: I1121 14:06:52.530598 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46d63715-407d-4908-a38f-5f6fd76729db" path="/var/lib/kubelet/pods/46d63715-407d-4908-a38f-5f6fd76729db/volumes" Nov 21 14:07:11 crc kubenswrapper[4904]: I1121 14:07:11.053694 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-dvqn9"] Nov 21 14:07:11 crc kubenswrapper[4904]: I1121 14:07:11.065642 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8fa3-account-create-z2zh6"] Nov 21 14:07:11 crc kubenswrapper[4904]: I1121 14:07:11.076629 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2wkx4"] Nov 21 14:07:11 crc kubenswrapper[4904]: I1121 14:07:11.085736 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2wkx4"] Nov 21 14:07:11 crc kubenswrapper[4904]: I1121 14:07:11.093953 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8fa3-account-create-z2zh6"] Nov 21 14:07:11 crc kubenswrapper[4904]: I1121 14:07:11.101310 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-dvqn9"] Nov 21 14:07:11 crc kubenswrapper[4904]: I1121 14:07:11.754819 4904 generic.go:334] "Generic (PLEG): container finished" podID="6afae033-a8c3-4acb-a441-e7178c4bd031" containerID="fc38a30100d602ffc813ed69dcd6e6c3f742e20347f8f2faba89295ad5e70558" exitCode=0 Nov 21 14:07:11 crc kubenswrapper[4904]: I1121 14:07:11.754894 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" event={"ID":"6afae033-a8c3-4acb-a441-e7178c4bd031","Type":"ContainerDied","Data":"fc38a30100d602ffc813ed69dcd6e6c3f742e20347f8f2faba89295ad5e70558"} Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.036331 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4njzh"] Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.048078 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4569-account-create-kkf9c"] Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.082927 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4njzh"] Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.108923 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-f88f-account-create-ljzdz"] Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.121526 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-f88f-account-create-ljzdz"] Nov 21 14:07:12 
crc kubenswrapper[4904]: I1121 14:07:12.133237 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4569-account-create-kkf9c"] Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.527541 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19585c9c-688b-45f2-bc52-83be15f37165" path="/var/lib/kubelet/pods/19585c9c-688b-45f2-bc52-83be15f37165/volumes" Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.528220 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a8325f0-6c0c-4ae0-98b5-be6835297e21" path="/var/lib/kubelet/pods/2a8325f0-6c0c-4ae0-98b5-be6835297e21/volumes" Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.528823 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e22a1fb-12bb-4241-bb3c-8a659b96630b" path="/var/lib/kubelet/pods/5e22a1fb-12bb-4241-bb3c-8a659b96630b/volumes" Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.529440 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ae1c31-c9da-4ab8-ae06-1d287d556e56" path="/var/lib/kubelet/pods/60ae1c31-c9da-4ab8-ae06-1d287d556e56/volumes" Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.530664 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8afdd3-5682-4977-b883-b58fb1f25857" path="/var/lib/kubelet/pods/ae8afdd3-5682-4977-b883-b58fb1f25857/volumes" Nov 21 14:07:12 crc kubenswrapper[4904]: I1121 14:07:12.531276 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95d5901-0cb0-4e5f-82ab-be2364b11c5e" path="/var/lib/kubelet/pods/b95d5901-0cb0-4e5f-82ab-be2364b11c5e/volumes" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.276233 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.343499 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-ssh-key\") pod \"6afae033-a8c3-4acb-a441-e7178c4bd031\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.344193 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-inventory\") pod \"6afae033-a8c3-4acb-a441-e7178c4bd031\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.344413 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzrw6\" (UniqueName: \"kubernetes.io/projected/6afae033-a8c3-4acb-a441-e7178c4bd031-kube-api-access-kzrw6\") pod \"6afae033-a8c3-4acb-a441-e7178c4bd031\" (UID: \"6afae033-a8c3-4acb-a441-e7178c4bd031\") " Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.357729 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6afae033-a8c3-4acb-a441-e7178c4bd031-kube-api-access-kzrw6" (OuterVolumeSpecName: "kube-api-access-kzrw6") pod "6afae033-a8c3-4acb-a441-e7178c4bd031" (UID: "6afae033-a8c3-4acb-a441-e7178c4bd031"). InnerVolumeSpecName "kube-api-access-kzrw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.384522 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-inventory" (OuterVolumeSpecName: "inventory") pod "6afae033-a8c3-4acb-a441-e7178c4bd031" (UID: "6afae033-a8c3-4acb-a441-e7178c4bd031"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.388677 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6afae033-a8c3-4acb-a441-e7178c4bd031" (UID: "6afae033-a8c3-4acb-a441-e7178c4bd031"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.447445 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.447490 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6afae033-a8c3-4acb-a441-e7178c4bd031-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.447505 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzrw6\" (UniqueName: \"kubernetes.io/projected/6afae033-a8c3-4acb-a441-e7178c4bd031-kube-api-access-kzrw6\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.801761 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" event={"ID":"6afae033-a8c3-4acb-a441-e7178c4bd031","Type":"ContainerDied","Data":"8df7c510ba1120b2c3c5e03c058188430b4f1bc1f38f204a8213b2cb6c2e9ce1"} Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.801814 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8df7c510ba1120b2c3c5e03c058188430b4f1bc1f38f204a8213b2cb6c2e9ce1" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.801812 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.886893 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9"] Nov 21 14:07:13 crc kubenswrapper[4904]: E1121 14:07:13.887630 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6afae033-a8c3-4acb-a441-e7178c4bd031" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.887683 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="6afae033-a8c3-4acb-a441-e7178c4bd031" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.887967 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="6afae033-a8c3-4acb-a441-e7178c4bd031" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.889139 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.891981 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.892244 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.892425 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.893001 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.899458 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9"] Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.962061 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.962152 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g269q\" (UniqueName: \"kubernetes.io/projected/eb5007ed-771b-41ad-be4b-f2d008a9217b-kube-api-access-g269q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:13 crc kubenswrapper[4904]: I1121 14:07:13.962218 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:14 crc kubenswrapper[4904]: I1121 14:07:14.065323 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:14 crc kubenswrapper[4904]: I1121 14:07:14.065467 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g269q\" (UniqueName: \"kubernetes.io/projected/eb5007ed-771b-41ad-be4b-f2d008a9217b-kube-api-access-g269q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:14 crc kubenswrapper[4904]: I1121 14:07:14.065514 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:14 crc kubenswrapper[4904]: I1121 14:07:14.071701 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:14 crc kubenswrapper[4904]: I1121 14:07:14.073526 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:14 crc kubenswrapper[4904]: I1121 14:07:14.085868 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g269q\" (UniqueName: \"kubernetes.io/projected/eb5007ed-771b-41ad-be4b-f2d008a9217b-kube-api-access-g269q\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bzls9\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:14 crc kubenswrapper[4904]: I1121 14:07:14.211282 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:14 crc kubenswrapper[4904]: I1121 14:07:14.954589 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9"] Nov 21 14:07:14 crc kubenswrapper[4904]: W1121 14:07:14.962090 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb5007ed_771b_41ad_be4b_f2d008a9217b.slice/crio-0e8acee2dd20533442a1d6d75044b91cf03d83b470d3e17cfd2bbe1892eaaeff WatchSource:0}: Error finding container 0e8acee2dd20533442a1d6d75044b91cf03d83b470d3e17cfd2bbe1892eaaeff: Status 404 returned error can't find the container with id 0e8acee2dd20533442a1d6d75044b91cf03d83b470d3e17cfd2bbe1892eaaeff Nov 21 14:07:15 crc kubenswrapper[4904]: I1121 14:07:15.830806 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" event={"ID":"eb5007ed-771b-41ad-be4b-f2d008a9217b","Type":"ContainerStarted","Data":"74d4fd0e067790b8271e124c2f85954f4ea4a61a98713def6057ecddf7e52b9f"} Nov 21 14:07:15 crc kubenswrapper[4904]: I1121 14:07:15.831241 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" event={"ID":"eb5007ed-771b-41ad-be4b-f2d008a9217b","Type":"ContainerStarted","Data":"0e8acee2dd20533442a1d6d75044b91cf03d83b470d3e17cfd2bbe1892eaaeff"} Nov 21 14:07:15 crc kubenswrapper[4904]: I1121 14:07:15.864867 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" podStartSLOduration=2.450043573 podStartE2EDuration="2.86482996s" podCreationTimestamp="2025-11-21 14:07:13 +0000 UTC" firstStartedPulling="2025-11-21 14:07:14.965534763 +0000 UTC m=+2109.087067315" 
lastFinishedPulling="2025-11-21 14:07:15.38032115 +0000 UTC m=+2109.501853702" observedRunningTime="2025-11-21 14:07:15.853905462 +0000 UTC m=+2109.975438014" watchObservedRunningTime="2025-11-21 14:07:15.86482996 +0000 UTC m=+2109.986362522" Nov 21 14:07:20 crc kubenswrapper[4904]: I1121 14:07:20.898779 4904 generic.go:334] "Generic (PLEG): container finished" podID="eb5007ed-771b-41ad-be4b-f2d008a9217b" containerID="74d4fd0e067790b8271e124c2f85954f4ea4a61a98713def6057ecddf7e52b9f" exitCode=0 Nov 21 14:07:20 crc kubenswrapper[4904]: I1121 14:07:20.898853 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" event={"ID":"eb5007ed-771b-41ad-be4b-f2d008a9217b","Type":"ContainerDied","Data":"74d4fd0e067790b8271e124c2f85954f4ea4a61a98713def6057ecddf7e52b9f"} Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.622438 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.701179 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g269q\" (UniqueName: \"kubernetes.io/projected/eb5007ed-771b-41ad-be4b-f2d008a9217b-kube-api-access-g269q\") pod \"eb5007ed-771b-41ad-be4b-f2d008a9217b\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.701236 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-inventory\") pod \"eb5007ed-771b-41ad-be4b-f2d008a9217b\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.701357 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-ssh-key\") pod \"eb5007ed-771b-41ad-be4b-f2d008a9217b\" (UID: \"eb5007ed-771b-41ad-be4b-f2d008a9217b\") " Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.720614 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5007ed-771b-41ad-be4b-f2d008a9217b-kube-api-access-g269q" (OuterVolumeSpecName: "kube-api-access-g269q") pod "eb5007ed-771b-41ad-be4b-f2d008a9217b" (UID: "eb5007ed-771b-41ad-be4b-f2d008a9217b"). InnerVolumeSpecName "kube-api-access-g269q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.746769 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-inventory" (OuterVolumeSpecName: "inventory") pod "eb5007ed-771b-41ad-be4b-f2d008a9217b" (UID: "eb5007ed-771b-41ad-be4b-f2d008a9217b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.749480 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb5007ed-771b-41ad-be4b-f2d008a9217b" (UID: "eb5007ed-771b-41ad-be4b-f2d008a9217b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.806069 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g269q\" (UniqueName: \"kubernetes.io/projected/eb5007ed-771b-41ad-be4b-f2d008a9217b-kube-api-access-g269q\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.806114 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.806123 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb5007ed-771b-41ad-be4b-f2d008a9217b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.929940 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" event={"ID":"eb5007ed-771b-41ad-be4b-f2d008a9217b","Type":"ContainerDied","Data":"0e8acee2dd20533442a1d6d75044b91cf03d83b470d3e17cfd2bbe1892eaaeff"} Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.929982 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9" Nov 21 14:07:22 crc kubenswrapper[4904]: I1121 14:07:22.929993 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e8acee2dd20533442a1d6d75044b91cf03d83b470d3e17cfd2bbe1892eaaeff" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.110363 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl"] Nov 21 14:07:23 crc kubenswrapper[4904]: E1121 14:07:23.110878 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5007ed-771b-41ad-be4b-f2d008a9217b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.110899 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5007ed-771b-41ad-be4b-f2d008a9217b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.111119 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5007ed-771b-41ad-be4b-f2d008a9217b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.111945 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.113909 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.113970 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.116565 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.118883 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.123996 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl"] Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.215577 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.215679 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.215772 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhkwl\" (UniqueName: \"kubernetes.io/projected/c093339a-118d-434d-a647-fff82c7882d2-kube-api-access-lhkwl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.317797 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.317901 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.318370 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhkwl\" (UniqueName: \"kubernetes.io/projected/c093339a-118d-434d-a647-fff82c7882d2-kube-api-access-lhkwl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: 
\"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.328233 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.335228 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.378800 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhkwl\" (UniqueName: \"kubernetes.io/projected/c093339a-118d-434d-a647-fff82c7882d2-kube-api-access-lhkwl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-9lwsl\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:23 crc kubenswrapper[4904]: I1121 14:07:23.448546 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:07:24 crc kubenswrapper[4904]: I1121 14:07:24.039950 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl"] Nov 21 14:07:24 crc kubenswrapper[4904]: I1121 14:07:24.951333 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" event={"ID":"c093339a-118d-434d-a647-fff82c7882d2","Type":"ContainerStarted","Data":"307f342d267b110f16c6a9854233bd486456514c2fea0c36e6f514bb39a89da8"} Nov 21 14:07:24 crc kubenswrapper[4904]: I1121 14:07:24.951683 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" event={"ID":"c093339a-118d-434d-a647-fff82c7882d2","Type":"ContainerStarted","Data":"af8559bd9f99322c5bf991b9f9a622c376fdc260d19a704f3f71d785ae2a073d"} Nov 21 14:07:24 crc kubenswrapper[4904]: I1121 14:07:24.980888 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" podStartSLOduration=1.575875204 podStartE2EDuration="1.980858521s" podCreationTimestamp="2025-11-21 14:07:23 +0000 UTC" firstStartedPulling="2025-11-21 14:07:24.047472696 +0000 UTC m=+2118.169005248" lastFinishedPulling="2025-11-21 14:07:24.452455993 +0000 UTC m=+2118.573988565" observedRunningTime="2025-11-21 14:07:24.97063646 +0000 UTC m=+2119.092169032" watchObservedRunningTime="2025-11-21 14:07:24.980858521 +0000 UTC m=+2119.102391073" Nov 21 14:07:27 crc kubenswrapper[4904]: I1121 14:07:27.461841 4904 scope.go:117] "RemoveContainer" containerID="c42ed548f2712bf7945ce966c0e0510f943821ae46678a15b6272ce972ee904b" Nov 21 14:07:27 crc kubenswrapper[4904]: I1121 14:07:27.504557 4904 scope.go:117] "RemoveContainer" containerID="5f4858f17160180eb95a8561d47f3cc093db3518d06fe831b442671d897d6fa8" Nov 21 14:07:27 crc kubenswrapper[4904]: I1121 14:07:27.558263 4904 scope.go:117] 
"RemoveContainer" containerID="21fcee51c5fe36cec2c9f8021ad75caab1b04e83dc55f599a6e326cf98eedc5e" Nov 21 14:07:27 crc kubenswrapper[4904]: I1121 14:07:27.634835 4904 scope.go:117] "RemoveContainer" containerID="674310f15dc50a6947a9806d3777baa8f9988f25218411836f49bf8fb95c31a7" Nov 21 14:07:27 crc kubenswrapper[4904]: I1121 14:07:27.767815 4904 scope.go:117] "RemoveContainer" containerID="352cf467b470ed1d24d04fcae3346c00819fb465468e8dee548f4b08d1a83292" Nov 21 14:07:27 crc kubenswrapper[4904]: I1121 14:07:27.827872 4904 scope.go:117] "RemoveContainer" containerID="552cae89d7343a633d6ec565dc2b95e53b3aad993589a3df2126af186255c0d1" Nov 21 14:07:27 crc kubenswrapper[4904]: I1121 14:07:27.870430 4904 scope.go:117] "RemoveContainer" containerID="bb9939d32bb444c076034ef85a7c1986f8b67a8f7a40495d009f6a31dfd19193" Nov 21 14:07:27 crc kubenswrapper[4904]: I1121 14:07:27.892343 4904 scope.go:117] "RemoveContainer" containerID="ca7af29aef96e3d1553280ad437c7650f3048e7269273edad710154ceaf879fb" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.625111 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z2c9w"] Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.630331 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.668075 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z2c9w"] Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.796465 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-utilities\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.796536 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnv5h\" (UniqueName: \"kubernetes.io/projected/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-kube-api-access-lnv5h\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.796996 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-catalog-content\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.899500 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-utilities\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.899674 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnv5h\" (UniqueName: \"kubernetes.io/projected/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-kube-api-access-lnv5h\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " 
pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.899763 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-catalog-content\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.900847 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-utilities\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.901157 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-catalog-content\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.925218 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnv5h\" (UniqueName: \"kubernetes.io/projected/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-kube-api-access-lnv5h\") pod \"certified-operators-z2c9w\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:37 crc kubenswrapper[4904]: I1121 14:07:37.967040 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:38 crc kubenswrapper[4904]: I1121 14:07:38.619642 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z2c9w"] Nov 21 14:07:38 crc kubenswrapper[4904]: W1121 14:07:38.627432 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac06d2e_2d71_4c1d_b0a2_aa51e4a51d6a.slice/crio-3f15fa2b8d4e4087789a243eaaf49b29b060d3b0d3095a615063ebba9281ebc1 WatchSource:0}: Error finding container 3f15fa2b8d4e4087789a243eaaf49b29b060d3b0d3095a615063ebba9281ebc1: Status 404 returned error can't find the container with id 3f15fa2b8d4e4087789a243eaaf49b29b060d3b0d3095a615063ebba9281ebc1 Nov 21 14:07:39 crc kubenswrapper[4904]: I1121 14:07:39.134714 4904 generic.go:334] "Generic (PLEG): container finished" podID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerID="81b24638ad665973d0372ed94d269bf30af5097754eacb2cda7e6a3d4b2e8543" exitCode=0 Nov 21 14:07:39 crc kubenswrapper[4904]: I1121 14:07:39.134775 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2c9w" event={"ID":"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a","Type":"ContainerDied","Data":"81b24638ad665973d0372ed94d269bf30af5097754eacb2cda7e6a3d4b2e8543"} Nov 21 14:07:39 crc kubenswrapper[4904]: I1121 14:07:39.134813 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2c9w" event={"ID":"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a","Type":"ContainerStarted","Data":"3f15fa2b8d4e4087789a243eaaf49b29b060d3b0d3095a615063ebba9281ebc1"} Nov 21 14:07:40 crc kubenswrapper[4904]: I1121 14:07:40.149441 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-z2c9w" event={"ID":"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a","Type":"ContainerStarted","Data":"9dda1061bba53aa68577a28589477ba44729620423da6f8f7504e79fabe7be10"} Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:42.999980 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-djzs5"] Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.007489 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.015136 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-djzs5"] Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.077690 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mb5p"] Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.099200 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5mb5p"] Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.102206 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7785a9-be25-4320-a9ae-eb81a5a620e9-utilities\") pod \"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.102239 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7785a9-be25-4320-a9ae-eb81a5a620e9-catalog-content\") pod \"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.102355 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxrc4\" (UniqueName: \"kubernetes.io/projected/ba7785a9-be25-4320-a9ae-eb81a5a620e9-kube-api-access-wxrc4\") pod \"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.191026 4904 generic.go:334] "Generic (PLEG): container finished" podID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerID="9dda1061bba53aa68577a28589477ba44729620423da6f8f7504e79fabe7be10" exitCode=0 Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.191136 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2c9w" event={"ID":"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a","Type":"ContainerDied","Data":"9dda1061bba53aa68577a28589477ba44729620423da6f8f7504e79fabe7be10"} Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.206872 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7785a9-be25-4320-a9ae-eb81a5a620e9-utilities\") pod \"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.206960 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7785a9-be25-4320-a9ae-eb81a5a620e9-catalog-content\") pod 
\"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.207295 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxrc4\" (UniqueName: \"kubernetes.io/projected/ba7785a9-be25-4320-a9ae-eb81a5a620e9-kube-api-access-wxrc4\") pod \"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.207686 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba7785a9-be25-4320-a9ae-eb81a5a620e9-utilities\") pod \"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.208028 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba7785a9-be25-4320-a9ae-eb81a5a620e9-catalog-content\") pod \"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.251687 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxrc4\" (UniqueName: \"kubernetes.io/projected/ba7785a9-be25-4320-a9ae-eb81a5a620e9-kube-api-access-wxrc4\") pod \"community-operators-djzs5\" (UID: \"ba7785a9-be25-4320-a9ae-eb81a5a620e9\") " pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: E1121 14:07:43.339500 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac06d2e_2d71_4c1d_b0a2_aa51e4a51d6a.slice/crio-9dda1061bba53aa68577a28589477ba44729620423da6f8f7504e79fabe7be10.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddac06d2e_2d71_4c1d_b0a2_aa51e4a51d6a.slice/crio-conmon-9dda1061bba53aa68577a28589477ba44729620423da6f8f7504e79fabe7be10.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.353772 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:07:43 crc kubenswrapper[4904]: I1121 14:07:43.946908 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-djzs5"] Nov 21 14:07:44 crc kubenswrapper[4904]: I1121 14:07:44.214391 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2c9w" event={"ID":"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a","Type":"ContainerStarted","Data":"079bb3befb4bda95c8cf70a71be61394e776748a7abe6f25d85b9780a16e4bf5"} Nov 21 14:07:44 crc kubenswrapper[4904]: I1121 14:07:44.217497 4904 generic.go:334] "Generic (PLEG): container finished" podID="ba7785a9-be25-4320-a9ae-eb81a5a620e9" containerID="9fc305401a82f0cc9e8416d98abd115ba61259632ffcddfcfd46863e0f22848f" exitCode=0 Nov 21 14:07:44 crc kubenswrapper[4904]: I1121 14:07:44.217535 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djzs5" event={"ID":"ba7785a9-be25-4320-a9ae-eb81a5a620e9","Type":"ContainerDied","Data":"9fc305401a82f0cc9e8416d98abd115ba61259632ffcddfcfd46863e0f22848f"} Nov 21 14:07:44 crc kubenswrapper[4904]: I1121 14:07:44.217555 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djzs5" event={"ID":"ba7785a9-be25-4320-a9ae-eb81a5a620e9","Type":"ContainerStarted","Data":"1f57198722f5dce3a4a2404cf1dfec12c9c58db64ed2e58dec225c789a10e149"} Nov 21 14:07:44 crc kubenswrapper[4904]: I1121 14:07:44.249214 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z2c9w" podStartSLOduration=2.548300297 podStartE2EDuration="7.249175129s" podCreationTimestamp="2025-11-21 14:07:37 +0000 UTC" firstStartedPulling="2025-11-21 14:07:39.13686216 +0000 UTC m=+2133.258394702" lastFinishedPulling="2025-11-21 14:07:43.837736982 +0000 UTC m=+2137.959269534" observedRunningTime="2025-11-21 14:07:44.233890595 +0000 UTC m=+2138.355423147" watchObservedRunningTime="2025-11-21 14:07:44.249175129 +0000 UTC m=+2138.370707701" Nov 21 14:07:44 crc kubenswrapper[4904]: I1121 14:07:44.530119 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="524ce65b-9914-4643-8132-50ee21805a8c" path="/var/lib/kubelet/pods/524ce65b-9914-4643-8132-50ee21805a8c/volumes" Nov 21 14:07:47 crc kubenswrapper[4904]: I1121 14:07:47.969843 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:47 crc kubenswrapper[4904]: I1121 14:07:47.970316 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:48 crc kubenswrapper[4904]: I1121 14:07:48.032785 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:52 crc kubenswrapper[4904]: I1121 14:07:52.329159 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djzs5" event={"ID":"ba7785a9-be25-4320-a9ae-eb81a5a620e9","Type":"ContainerStarted","Data":"437fb823253fe00249e26ebb43b8111f40caa3421015acd6fd67ddde5b856c56"} Nov 21 14:07:53 crc kubenswrapper[4904]: I1121 14:07:53.343812 4904 generic.go:334] "Generic (PLEG): container finished" podID="ba7785a9-be25-4320-a9ae-eb81a5a620e9" containerID="437fb823253fe00249e26ebb43b8111f40caa3421015acd6fd67ddde5b856c56" exitCode=0 Nov 21 14:07:53 crc kubenswrapper[4904]: 
I1121 14:07:53.343924 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djzs5" event={"ID":"ba7785a9-be25-4320-a9ae-eb81a5a620e9","Type":"ContainerDied","Data":"437fb823253fe00249e26ebb43b8111f40caa3421015acd6fd67ddde5b856c56"} Nov 21 14:07:54 crc kubenswrapper[4904]: I1121 14:07:54.362298 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-djzs5" event={"ID":"ba7785a9-be25-4320-a9ae-eb81a5a620e9","Type":"ContainerStarted","Data":"1e605b358caae746d6b2dfdfb368ef241af6acdb173e3b7ff30f7333dafb3b63"} Nov 21 14:07:54 crc kubenswrapper[4904]: I1121 14:07:54.386360 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-djzs5" podStartSLOduration=2.665869923 podStartE2EDuration="12.38633122s" podCreationTimestamp="2025-11-21 14:07:42 +0000 UTC" firstStartedPulling="2025-11-21 14:07:44.220129168 +0000 UTC m=+2138.341661720" lastFinishedPulling="2025-11-21 14:07:53.940590475 +0000 UTC m=+2148.062123017" observedRunningTime="2025-11-21 14:07:54.38633132 +0000 UTC m=+2148.507863882" watchObservedRunningTime="2025-11-21 14:07:54.38633122 +0000 UTC m=+2148.507863772" Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.028175 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.051992 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-mx7lv"] Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.067813 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-6c90-account-create-4nfs9"] Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.080129 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-mx7lv"] Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.094786 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-6c90-account-create-4nfs9"] Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.114151 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.114240 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.115714 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z2c9w"] Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.403716 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z2c9w" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerName="registry-server" containerID="cri-o://079bb3befb4bda95c8cf70a71be61394e776748a7abe6f25d85b9780a16e4bf5" gracePeriod=2 Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.529408 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed2edaa-f8f5-47fe-b8c5-94c81c3b6368" 
path="/var/lib/kubelet/pods/bed2edaa-f8f5-47fe-b8c5-94c81c3b6368/volumes" Nov 21 14:07:58 crc kubenswrapper[4904]: I1121 14:07:58.530109 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebe1209b-266b-4875-9356-4592af75e127" path="/var/lib/kubelet/pods/ebe1209b-266b-4875-9356-4592af75e127/volumes" Nov 21 14:08:00 crc kubenswrapper[4904]: I1121 14:08:00.435086 4904 generic.go:334] "Generic (PLEG): container finished" podID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerID="079bb3befb4bda95c8cf70a71be61394e776748a7abe6f25d85b9780a16e4bf5" exitCode=0 Nov 21 14:08:00 crc kubenswrapper[4904]: I1121 14:08:00.435185 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2c9w" event={"ID":"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a","Type":"ContainerDied","Data":"079bb3befb4bda95c8cf70a71be61394e776748a7abe6f25d85b9780a16e4bf5"} Nov 21 14:08:00 crc kubenswrapper[4904]: I1121 14:08:00.944950 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.026236 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-utilities\") pod \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.026483 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-catalog-content\") pod \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.026722 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnv5h\" (UniqueName: \"kubernetes.io/projected/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-kube-api-access-lnv5h\") pod \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\" (UID: \"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a\") " Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.028183 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-utilities" (OuterVolumeSpecName: "utilities") pod "dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" (UID: "dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.035888 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-kube-api-access-lnv5h" (OuterVolumeSpecName: "kube-api-access-lnv5h") pod "dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" (UID: "dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a"). InnerVolumeSpecName "kube-api-access-lnv5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.078774 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" (UID: "dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.129989 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.130034 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.130051 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnv5h\" (UniqueName: \"kubernetes.io/projected/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a-kube-api-access-lnv5h\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.448412 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z2c9w" event={"ID":"dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a","Type":"ContainerDied","Data":"3f15fa2b8d4e4087789a243eaaf49b29b060d3b0d3095a615063ebba9281ebc1"} Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.448485 4904 scope.go:117] "RemoveContainer" containerID="079bb3befb4bda95c8cf70a71be61394e776748a7abe6f25d85b9780a16e4bf5" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.448520 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z2c9w" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.477902 4904 scope.go:117] "RemoveContainer" containerID="9dda1061bba53aa68577a28589477ba44729620423da6f8f7504e79fabe7be10" Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.488941 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z2c9w"] Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.502796 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z2c9w"] Nov 21 14:08:01 crc kubenswrapper[4904]: I1121 14:08:01.529022 4904 scope.go:117] "RemoveContainer" containerID="81b24638ad665973d0372ed94d269bf30af5097754eacb2cda7e6a3d4b2e8543" Nov 21 14:08:02 crc kubenswrapper[4904]: I1121 14:08:02.532044 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" path="/var/lib/kubelet/pods/dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a/volumes" Nov 21 14:08:03 crc kubenswrapper[4904]: I1121 14:08:03.354125 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:08:03 crc kubenswrapper[4904]: I1121 14:08:03.354198 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:08:03 crc kubenswrapper[4904]: I1121 14:08:03.407746 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:08:03 crc kubenswrapper[4904]: I1121 14:08:03.525155 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-djzs5" Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.028731 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-djzs5"] Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.186027 4904 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-77hfr"] Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.186442 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-77hfr" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerName="registry-server" containerID="cri-o://20517aae05fcf42d36ba3b9c7996cce9a7f498dbe23d6173f073e179c1d02da2" gracePeriod=2 Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.513252 4904 generic.go:334] "Generic (PLEG): container finished" podID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerID="20517aae05fcf42d36ba3b9c7996cce9a7f498dbe23d6173f073e179c1d02da2" exitCode=0 Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.536147 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77hfr" event={"ID":"13c247bc-336f-4ad1-ad29-e860a1f22730","Type":"ContainerDied","Data":"20517aae05fcf42d36ba3b9c7996cce9a7f498dbe23d6173f073e179c1d02da2"} Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.858553 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77hfr" Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.946638 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw22t\" (UniqueName: \"kubernetes.io/projected/13c247bc-336f-4ad1-ad29-e860a1f22730-kube-api-access-zw22t\") pod \"13c247bc-336f-4ad1-ad29-e860a1f22730\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.946873 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-utilities\") pod \"13c247bc-336f-4ad1-ad29-e860a1f22730\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.947160 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-catalog-content\") pod \"13c247bc-336f-4ad1-ad29-e860a1f22730\" (UID: \"13c247bc-336f-4ad1-ad29-e860a1f22730\") " Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.964996 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-utilities" (OuterVolumeSpecName: "utilities") pod "13c247bc-336f-4ad1-ad29-e860a1f22730" (UID: "13c247bc-336f-4ad1-ad29-e860a1f22730"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:08:04 crc kubenswrapper[4904]: I1121 14:08:04.985240 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c247bc-336f-4ad1-ad29-e860a1f22730-kube-api-access-zw22t" (OuterVolumeSpecName: "kube-api-access-zw22t") pod "13c247bc-336f-4ad1-ad29-e860a1f22730" (UID: "13c247bc-336f-4ad1-ad29-e860a1f22730"). InnerVolumeSpecName "kube-api-access-zw22t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.051176 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw22t\" (UniqueName: \"kubernetes.io/projected/13c247bc-336f-4ad1-ad29-e860a1f22730-kube-api-access-zw22t\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.051219 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.072291 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13c247bc-336f-4ad1-ad29-e860a1f22730" (UID: "13c247bc-336f-4ad1-ad29-e860a1f22730"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.153021 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c247bc-336f-4ad1-ad29-e860a1f22730-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.528207 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77hfr" event={"ID":"13c247bc-336f-4ad1-ad29-e860a1f22730","Type":"ContainerDied","Data":"4cb707cf4471cd0449564eff3904506a8012582e6daf9e4651df470b54f6f322"} Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.528300 4904 scope.go:117] "RemoveContainer" containerID="20517aae05fcf42d36ba3b9c7996cce9a7f498dbe23d6173f073e179c1d02da2" Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.528308 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77hfr" Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.530346 4904 generic.go:334] "Generic (PLEG): container finished" podID="c093339a-118d-434d-a647-fff82c7882d2" containerID="307f342d267b110f16c6a9854233bd486456514c2fea0c36e6f514bb39a89da8" exitCode=0 Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.530430 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" event={"ID":"c093339a-118d-434d-a647-fff82c7882d2","Type":"ContainerDied","Data":"307f342d267b110f16c6a9854233bd486456514c2fea0c36e6f514bb39a89da8"} Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.559408 4904 scope.go:117] "RemoveContainer" containerID="a1eba7148395120f4c569645c976e716fe316f79a0ee7855de1ebc8deca178a1" Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.598900 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77hfr"] Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.609339 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-77hfr"] Nov 21 14:08:05 crc kubenswrapper[4904]: I1121 14:08:05.620941 4904 scope.go:117] "RemoveContainer" containerID="904e394c9f254c40ed303b7f35fa748c2f9e6beaef7f008dca78aa7e40a931fd" Nov 21 14:08:06 crc kubenswrapper[4904]: I1121 14:08:06.527955 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" path="/var/lib/kubelet/pods/13c247bc-336f-4ad1-ad29-e860a1f22730/volumes" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.092038 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.209982 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhkwl\" (UniqueName: \"kubernetes.io/projected/c093339a-118d-434d-a647-fff82c7882d2-kube-api-access-lhkwl\") pod \"c093339a-118d-434d-a647-fff82c7882d2\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.210086 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-inventory\") pod \"c093339a-118d-434d-a647-fff82c7882d2\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.210134 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-ssh-key\") pod \"c093339a-118d-434d-a647-fff82c7882d2\" (UID: \"c093339a-118d-434d-a647-fff82c7882d2\") " Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.219946 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c093339a-118d-434d-a647-fff82c7882d2-kube-api-access-lhkwl" (OuterVolumeSpecName: "kube-api-access-lhkwl") pod "c093339a-118d-434d-a647-fff82c7882d2" (UID: "c093339a-118d-434d-a647-fff82c7882d2"). InnerVolumeSpecName "kube-api-access-lhkwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.249137 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-inventory" (OuterVolumeSpecName: "inventory") pod "c093339a-118d-434d-a647-fff82c7882d2" (UID: "c093339a-118d-434d-a647-fff82c7882d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.251436 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c093339a-118d-434d-a647-fff82c7882d2" (UID: "c093339a-118d-434d-a647-fff82c7882d2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.313477 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhkwl\" (UniqueName: \"kubernetes.io/projected/c093339a-118d-434d-a647-fff82c7882d2-kube-api-access-lhkwl\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.313534 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.313545 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c093339a-118d-434d-a647-fff82c7882d2-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.583128 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" event={"ID":"c093339a-118d-434d-a647-fff82c7882d2","Type":"ContainerDied","Data":"af8559bd9f99322c5bf991b9f9a622c376fdc260d19a704f3f71d785ae2a073d"} Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.583566 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af8559bd9f99322c5bf991b9f9a622c376fdc260d19a704f3f71d785ae2a073d" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.583639 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.672633 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd"] Nov 21 14:08:07 crc kubenswrapper[4904]: E1121 14:08:07.673342 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerName="extract-utilities" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673365 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerName="extract-utilities" Nov 21 14:08:07 crc kubenswrapper[4904]: E1121 14:08:07.673413 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerName="extract-utilities" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673426 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerName="extract-utilities" Nov 21 14:08:07 crc kubenswrapper[4904]: E1121 14:08:07.673448 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerName="extract-content" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673457 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerName="extract-content" Nov 21 14:08:07 crc kubenswrapper[4904]: E1121 14:08:07.673467 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerName="registry-server" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673476 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerName="registry-server" Nov 21 14:08:07 crc kubenswrapper[4904]: E1121 14:08:07.673497 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerName="registry-server" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673504 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerName="registry-server" Nov 21 14:08:07 crc kubenswrapper[4904]: E1121 14:08:07.673515 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerName="extract-content" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673523 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerName="extract-content" Nov 21 14:08:07 crc kubenswrapper[4904]: E1121 14:08:07.673540 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c093339a-118d-434d-a647-fff82c7882d2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673549 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c093339a-118d-434d-a647-fff82c7882d2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673891 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac06d2e-2d71-4c1d-b0a2-aa51e4a51d6a" containerName="registry-server" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.673911 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c247bc-336f-4ad1-ad29-e860a1f22730" containerName="registry-server" Nov 21 14:08:07 
crc kubenswrapper[4904]: I1121 14:08:07.673936 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c093339a-118d-434d-a647-fff82c7882d2" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.675110 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.679732 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.679970 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.680176 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.680315 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.738925 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.739072 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.739104 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcddg\" (UniqueName: \"kubernetes.io/projected/70657906-3734-496b-b268-5b0ebbb6d6de-kube-api-access-fcddg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.769600 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd"] Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.842081 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.842198 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" 
Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.842227 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcddg\" (UniqueName: \"kubernetes.io/projected/70657906-3734-496b-b268-5b0ebbb6d6de-kube-api-access-fcddg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.849167 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.849580 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:07 crc kubenswrapper[4904]: I1121 14:08:07.863919 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcddg\" (UniqueName: \"kubernetes.io/projected/70657906-3734-496b-b268-5b0ebbb6d6de-kube-api-access-fcddg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:08 crc kubenswrapper[4904]: I1121 14:08:08.084636 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:08 crc kubenswrapper[4904]: I1121 14:08:08.648137 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd"] Nov 21 14:08:09 crc kubenswrapper[4904]: I1121 14:08:09.610542 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" event={"ID":"70657906-3734-496b-b268-5b0ebbb6d6de","Type":"ContainerStarted","Data":"8506a9659f382c37a0eae4d467d5b68098a5f740d321dd44f68986e7298a06c6"} Nov 21 14:08:09 crc kubenswrapper[4904]: I1121 14:08:09.611051 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" event={"ID":"70657906-3734-496b-b268-5b0ebbb6d6de","Type":"ContainerStarted","Data":"b4cdaba14605bca9c8937b537c7c5d2dc69ddcec105b35db8c2bde370cfce97a"} Nov 21 14:08:09 crc kubenswrapper[4904]: I1121 14:08:09.635277 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" podStartSLOduration=2.080167539 podStartE2EDuration="2.635254739s" podCreationTimestamp="2025-11-21 14:08:07 +0000 UTC" firstStartedPulling="2025-11-21 14:08:08.684144609 +0000 UTC m=+2162.805677161" lastFinishedPulling="2025-11-21 14:08:09.239231809 +0000 UTC m=+2163.360764361" observedRunningTime="2025-11-21 14:08:09.632326397 +0000 UTC m=+2163.753858949" watchObservedRunningTime="2025-11-21 14:08:09.635254739 +0000 UTC m=+2163.756787291" Nov 21 14:08:14 crc kubenswrapper[4904]: E1121 14:08:14.278272 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70657906_3734_496b_b268_5b0ebbb6d6de.slice/crio-conmon-8506a9659f382c37a0eae4d467d5b68098a5f740d321dd44f68986e7298a06c6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70657906_3734_496b_b268_5b0ebbb6d6de.slice/crio-8506a9659f382c37a0eae4d467d5b68098a5f740d321dd44f68986e7298a06c6.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:08:14 crc kubenswrapper[4904]: I1121 14:08:14.680159 4904 generic.go:334] "Generic (PLEG): container finished" podID="70657906-3734-496b-b268-5b0ebbb6d6de" containerID="8506a9659f382c37a0eae4d467d5b68098a5f740d321dd44f68986e7298a06c6" exitCode=0 Nov 21 14:08:14 crc kubenswrapper[4904]: I1121 14:08:14.680246 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" event={"ID":"70657906-3734-496b-b268-5b0ebbb6d6de","Type":"ContainerDied","Data":"8506a9659f382c37a0eae4d467d5b68098a5f740d321dd44f68986e7298a06c6"} Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.229946 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.377118 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-ssh-key\") pod \"70657906-3734-496b-b268-5b0ebbb6d6de\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.377656 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcddg\" (UniqueName: \"kubernetes.io/projected/70657906-3734-496b-b268-5b0ebbb6d6de-kube-api-access-fcddg\") pod \"70657906-3734-496b-b268-5b0ebbb6d6de\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.378143 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-inventory\") pod \"70657906-3734-496b-b268-5b0ebbb6d6de\" (UID: \"70657906-3734-496b-b268-5b0ebbb6d6de\") " Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.385587 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70657906-3734-496b-b268-5b0ebbb6d6de-kube-api-access-fcddg" (OuterVolumeSpecName: "kube-api-access-fcddg") pod "70657906-3734-496b-b268-5b0ebbb6d6de" (UID: "70657906-3734-496b-b268-5b0ebbb6d6de"). InnerVolumeSpecName "kube-api-access-fcddg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.416558 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-inventory" (OuterVolumeSpecName: "inventory") pod "70657906-3734-496b-b268-5b0ebbb6d6de" (UID: "70657906-3734-496b-b268-5b0ebbb6d6de"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.419824 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "70657906-3734-496b-b268-5b0ebbb6d6de" (UID: "70657906-3734-496b-b268-5b0ebbb6d6de"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.481143 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.481194 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/70657906-3734-496b-b268-5b0ebbb6d6de-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.481207 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcddg\" (UniqueName: \"kubernetes.io/projected/70657906-3734-496b-b268-5b0ebbb6d6de-kube-api-access-fcddg\") on node \"crc\" DevicePath \"\"" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.707601 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" event={"ID":"70657906-3734-496b-b268-5b0ebbb6d6de","Type":"ContainerDied","Data":"b4cdaba14605bca9c8937b537c7c5d2dc69ddcec105b35db8c2bde370cfce97a"} Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.707691 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4cdaba14605bca9c8937b537c7c5d2dc69ddcec105b35db8c2bde370cfce97a" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.707717 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.791424 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv"] Nov 21 14:08:16 crc kubenswrapper[4904]: E1121 14:08:16.792065 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70657906-3734-496b-b268-5b0ebbb6d6de" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.792089 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="70657906-3734-496b-b268-5b0ebbb6d6de" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.792332 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="70657906-3734-496b-b268-5b0ebbb6d6de" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.793450 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.797432 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.797606 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.797794 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.797919 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.823605 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv"] Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.892903 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.892977 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnzd\" (UniqueName: \"kubernetes.io/projected/3782dfd0-142f-4927-92fd-372372e50541-kube-api-access-dlnzd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.893471 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.995830 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.995906 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlnzd\" (UniqueName: \"kubernetes.io/projected/3782dfd0-142f-4927-92fd-372372e50541-kube-api-access-dlnzd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:16 crc kubenswrapper[4904]: I1121 14:08:16.996084 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" 
(UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:17 crc kubenswrapper[4904]: I1121 14:08:17.001231 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:17 crc kubenswrapper[4904]: I1121 14:08:17.005182 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:17 crc kubenswrapper[4904]: I1121 14:08:17.019514 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlnzd\" (UniqueName: \"kubernetes.io/projected/3782dfd0-142f-4927-92fd-372372e50541-kube-api-access-dlnzd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:17 crc kubenswrapper[4904]: I1121 14:08:17.133303 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:08:17 crc kubenswrapper[4904]: I1121 14:08:17.759952 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv"] Nov 21 14:08:18 crc kubenswrapper[4904]: I1121 14:08:18.763604 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" event={"ID":"3782dfd0-142f-4927-92fd-372372e50541","Type":"ContainerStarted","Data":"c9fb3fee4864cac6bc09705fb926ce9725efb86f5970c651b1f075609f80f0e5"} Nov 21 14:08:18 crc kubenswrapper[4904]: I1121 14:08:18.764072 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" event={"ID":"3782dfd0-142f-4927-92fd-372372e50541","Type":"ContainerStarted","Data":"d1ad24a9f987d87433caa333621752ef10fa167fd06cb6d35cdd6f6b7ca4c0ea"} Nov 21 14:08:18 crc kubenswrapper[4904]: I1121 14:08:18.793736 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" podStartSLOduration=2.392469907 podStartE2EDuration="2.793712435s" podCreationTimestamp="2025-11-21 14:08:16 +0000 UTC" firstStartedPulling="2025-11-21 14:08:17.771797742 +0000 UTC m=+2171.893330294" lastFinishedPulling="2025-11-21 14:08:18.17304025 +0000 UTC m=+2172.294572822" observedRunningTime="2025-11-21 14:08:18.787110273 +0000 UTC m=+2172.908642845" watchObservedRunningTime="2025-11-21 14:08:18.793712435 +0000 UTC m=+2172.915244987" Nov 21 14:08:24 crc kubenswrapper[4904]: I1121 14:08:24.054045 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qqn46"] Nov 21 14:08:24 crc kubenswrapper[4904]: I1121 14:08:24.066112 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smj2v"] Nov 21 14:08:24 crc kubenswrapper[4904]: I1121 14:08:24.074611 4904 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qqn46"] Nov 21 14:08:24 crc kubenswrapper[4904]: I1121 14:08:24.083512 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-smj2v"] Nov 21 14:08:24 crc kubenswrapper[4904]: I1121 14:08:24.527819 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="648982b6-6d9e-4aa1-9ec1-6191871129bc" path="/var/lib/kubelet/pods/648982b6-6d9e-4aa1-9ec1-6191871129bc/volumes" Nov 21 14:08:24 crc kubenswrapper[4904]: I1121 14:08:24.528613 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f179df31-95c5-4ae1-8a64-60caefab9aea" path="/var/lib/kubelet/pods/f179df31-95c5-4ae1-8a64-60caefab9aea/volumes" Nov 21 14:08:28 crc kubenswrapper[4904]: I1121 14:08:28.107161 4904 scope.go:117] "RemoveContainer" containerID="e0df5bbca88a8967f9ccb8cc22faf16dab30cdc15c0941c9d0f7584d52e09762" Nov 21 14:08:28 crc kubenswrapper[4904]: I1121 14:08:28.113783 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:08:28 crc kubenswrapper[4904]: I1121 14:08:28.113870 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:08:28 crc kubenswrapper[4904]: I1121 14:08:28.141528 4904 scope.go:117] "RemoveContainer" containerID="256819aac8d815483e2d74177d76961c3c1dcd047f989b09384a5a7152a7033e" Nov 21 14:08:28 crc kubenswrapper[4904]: I1121 14:08:28.222395 4904 scope.go:117] "RemoveContainer" containerID="a2a53d8e3514da576bfa67410dc7dfed5e47f8fff861832ce4fa0b89d226cf9a" Nov 21 14:08:28 crc kubenswrapper[4904]: I1121 14:08:28.308026 4904 scope.go:117] "RemoveContainer" containerID="bdac76f4d14d940974600f0038cd1de25e0dbd3c43b508a14b768c268586f2c6" Nov 21 14:08:28 crc kubenswrapper[4904]: I1121 14:08:28.369051 4904 scope.go:117] "RemoveContainer" containerID="bc611156b6ae74e4b66bae9902dc7fd4f5bee5207383c65327f9866971fb5434" Nov 21 14:08:58 crc kubenswrapper[4904]: I1121 14:08:58.113982 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:08:58 crc kubenswrapper[4904]: I1121 14:08:58.114783 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:08:58 crc kubenswrapper[4904]: I1121 14:08:58.114846 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 14:08:58 crc kubenswrapper[4904]: I1121 14:08:58.116086 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:08:58 crc kubenswrapper[4904]: I1121 14:08:58.116165 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" gracePeriod=600 Nov 21 14:08:58 crc kubenswrapper[4904]: E1121 14:08:58.251627 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:08:59 crc kubenswrapper[4904]: I1121 14:08:59.236829 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" exitCode=0 Nov 21 14:08:59 crc kubenswrapper[4904]: I1121 14:08:59.236927 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b"} Nov 21 14:08:59 crc kubenswrapper[4904]: I1121 14:08:59.237305 4904 scope.go:117] "RemoveContainer" containerID="c440976231c18075c6b34421dc688eb2b29ec89c3d96f4545c581aa060fb19c0" Nov 21 14:08:59 crc kubenswrapper[4904]: I1121 14:08:59.238440 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:08:59 crc kubenswrapper[4904]: E1121 14:08:59.238801 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:09:07 crc kubenswrapper[4904]: I1121 14:09:07.072041 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-p8zmt"] Nov 21 14:09:07 crc kubenswrapper[4904]: I1121 14:09:07.084834 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-p8zmt"] Nov 21 14:09:08 crc kubenswrapper[4904]: I1121 14:09:08.529645 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e2ed93-b0dc-4698-a3e7-415f090e9ab2" path="/var/lib/kubelet/pods/38e2ed93-b0dc-4698-a3e7-415f090e9ab2/volumes" Nov 21 14:09:13 crc kubenswrapper[4904]: I1121 14:09:13.513912 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:09:13 crc kubenswrapper[4904]: E1121 14:09:13.515037 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:09:16 crc kubenswrapper[4904]: I1121 14:09:16.444344 4904 generic.go:334] "Generic (PLEG): container finished" podID="3782dfd0-142f-4927-92fd-372372e50541" containerID="c9fb3fee4864cac6bc09705fb926ce9725efb86f5970c651b1f075609f80f0e5" exitCode=0 Nov 21 14:09:16 crc kubenswrapper[4904]: I1121 14:09:16.444419 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" event={"ID":"3782dfd0-142f-4927-92fd-372372e50541","Type":"ContainerDied","Data":"c9fb3fee4864cac6bc09705fb926ce9725efb86f5970c651b1f075609f80f0e5"} Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.129801 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.279124 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-ssh-key\") pod \"3782dfd0-142f-4927-92fd-372372e50541\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.279260 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlnzd\" (UniqueName: \"kubernetes.io/projected/3782dfd0-142f-4927-92fd-372372e50541-kube-api-access-dlnzd\") pod \"3782dfd0-142f-4927-92fd-372372e50541\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.279361 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-inventory\") pod \"3782dfd0-142f-4927-92fd-372372e50541\" (UID: \"3782dfd0-142f-4927-92fd-372372e50541\") " Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.288308 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3782dfd0-142f-4927-92fd-372372e50541-kube-api-access-dlnzd" (OuterVolumeSpecName: "kube-api-access-dlnzd") pod "3782dfd0-142f-4927-92fd-372372e50541" (UID: "3782dfd0-142f-4927-92fd-372372e50541"). InnerVolumeSpecName "kube-api-access-dlnzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.326867 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3782dfd0-142f-4927-92fd-372372e50541" (UID: "3782dfd0-142f-4927-92fd-372372e50541"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.327187 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-inventory" (OuterVolumeSpecName: "inventory") pod "3782dfd0-142f-4927-92fd-372372e50541" (UID: "3782dfd0-142f-4927-92fd-372372e50541"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.382076 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.382122 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlnzd\" (UniqueName: \"kubernetes.io/projected/3782dfd0-142f-4927-92fd-372372e50541-kube-api-access-dlnzd\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.382139 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3782dfd0-142f-4927-92fd-372372e50541-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.503333 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" event={"ID":"3782dfd0-142f-4927-92fd-372372e50541","Type":"ContainerDied","Data":"d1ad24a9f987d87433caa333621752ef10fa167fd06cb6d35cdd6f6b7ca4c0ea"} Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.503388 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1ad24a9f987d87433caa333621752ef10fa167fd06cb6d35cdd6f6b7ca4c0ea" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.503441 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.591893 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-x9spq"] Nov 21 14:09:18 crc kubenswrapper[4904]: E1121 14:09:18.592856 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3782dfd0-142f-4927-92fd-372372e50541" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.592953 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3782dfd0-142f-4927-92fd-372372e50541" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.593255 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3782dfd0-142f-4927-92fd-372372e50541" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.594408 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.597251 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.612571 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.612972 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.613118 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.665752 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-x9spq"] Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.691932 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.692128 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.692175 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffkfq\" (UniqueName: \"kubernetes.io/projected/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-kube-api-access-ffkfq\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.794514 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.795170 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffkfq\" (UniqueName: \"kubernetes.io/projected/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-kube-api-access-ffkfq\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.795259 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc 
kubenswrapper[4904]: I1121 14:09:18.802123 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.805316 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.819630 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffkfq\" (UniqueName: \"kubernetes.io/projected/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-kube-api-access-ffkfq\") pod \"ssh-known-hosts-edpm-deployment-x9spq\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:18 crc kubenswrapper[4904]: I1121 14:09:18.915968 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:19 crc kubenswrapper[4904]: I1121 14:09:19.505789 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-x9spq"] Nov 21 14:09:19 crc kubenswrapper[4904]: I1121 14:09:19.518952 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:09:20 crc kubenswrapper[4904]: I1121 14:09:20.530592 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" event={"ID":"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec","Type":"ContainerStarted","Data":"6647fb1070de1ed8bdb82698e88bf6b20bac0499651b1b0c03f996a102584817"} Nov 21 14:09:20 crc kubenswrapper[4904]: I1121 14:09:20.531174 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" event={"ID":"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec","Type":"ContainerStarted","Data":"ddd27bc98381180d4b9a3188c9109fb53d0dede825e75fb92c3ed08e72550908"} Nov 21 14:09:21 crc kubenswrapper[4904]: I1121 14:09:21.581371 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" podStartSLOduration=2.918712466 podStartE2EDuration="3.581339288s" podCreationTimestamp="2025-11-21 14:09:18 +0000 UTC" firstStartedPulling="2025-11-21 14:09:19.518698512 +0000 UTC m=+2233.640231064" lastFinishedPulling="2025-11-21 14:09:20.181325324 +0000 UTC m=+2234.302857886" observedRunningTime="2025-11-21 14:09:21.564802033 +0000 UTC m=+2235.686334605" watchObservedRunningTime="2025-11-21 14:09:21.581339288 +0000 UTC m=+2235.702871850" Nov 21 14:09:26 crc kubenswrapper[4904]: I1121 14:09:26.522006 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:09:26 crc kubenswrapper[4904]: E1121 14:09:26.522984 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:09:28 crc kubenswrapper[4904]: I1121 14:09:28.622269 4904 generic.go:334] "Generic (PLEG): container finished" podID="ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec" containerID="6647fb1070de1ed8bdb82698e88bf6b20bac0499651b1b0c03f996a102584817" exitCode=0 Nov 21 14:09:28 crc kubenswrapper[4904]: I1121 14:09:28.622349 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" event={"ID":"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec","Type":"ContainerDied","Data":"6647fb1070de1ed8bdb82698e88bf6b20bac0499651b1b0c03f996a102584817"} Nov 21 14:09:28 crc kubenswrapper[4904]: I1121 14:09:28.637242 4904 scope.go:117] "RemoveContainer" containerID="ab7380cc299ba5249a2f3e93358da7fc8cf113dab4b1425381172f7f802ff5e8" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.197315 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.301091 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffkfq\" (UniqueName: \"kubernetes.io/projected/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-kube-api-access-ffkfq\") pod \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.301372 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-inventory-0\") pod \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.301484 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-ssh-key-openstack-edpm-ipam\") pod \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\" (UID: \"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec\") " Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.316146 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-kube-api-access-ffkfq" (OuterVolumeSpecName: "kube-api-access-ffkfq") pod "ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec" (UID: "ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec"). InnerVolumeSpecName "kube-api-access-ffkfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.337231 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec" (UID: "ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.341210 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec" (UID: "ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.403845 4904 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.403886 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.403898 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffkfq\" (UniqueName: \"kubernetes.io/projected/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec-kube-api-access-ffkfq\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.648646 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" event={"ID":"ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec","Type":"ContainerDied","Data":"ddd27bc98381180d4b9a3188c9109fb53d0dede825e75fb92c3ed08e72550908"} Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.648717 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-x9spq" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.648722 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddd27bc98381180d4b9a3188c9109fb53d0dede825e75fb92c3ed08e72550908" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.814489 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq"] Nov 21 14:09:30 crc kubenswrapper[4904]: E1121 14:09:30.815298 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec" containerName="ssh-known-hosts-edpm-deployment" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.815318 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec" containerName="ssh-known-hosts-edpm-deployment" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.815611 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec" containerName="ssh-known-hosts-edpm-deployment" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.816851 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.819266 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.819489 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.819751 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.819966 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.844779 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq"] Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.919010 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.919294 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:30 crc kubenswrapper[4904]: I1121 14:09:30.919794 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j56s8\" (UniqueName: \"kubernetes.io/projected/f2c6830d-5463-465e-9d1e-eb695b465dbd-kube-api-access-j56s8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:31 crc kubenswrapper[4904]: I1121 14:09:31.022052 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j56s8\" (UniqueName: \"kubernetes.io/projected/f2c6830d-5463-465e-9d1e-eb695b465dbd-kube-api-access-j56s8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:31 crc kubenswrapper[4904]: I1121 14:09:31.022349 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:31 crc kubenswrapper[4904]: I1121 14:09:31.022423 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:31 crc kubenswrapper[4904]: I1121 14:09:31.030569 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:31 crc kubenswrapper[4904]: I1121 14:09:31.032641 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:31 crc kubenswrapper[4904]: I1121 14:09:31.044529 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j56s8\" (UniqueName: \"kubernetes.io/projected/f2c6830d-5463-465e-9d1e-eb695b465dbd-kube-api-access-j56s8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qszbq\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:31 crc kubenswrapper[4904]: I1121 14:09:31.143484 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:31 crc kubenswrapper[4904]: I1121 14:09:31.716707 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq"] Nov 21 14:09:32 crc kubenswrapper[4904]: I1121 14:09:32.671007 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" event={"ID":"f2c6830d-5463-465e-9d1e-eb695b465dbd","Type":"ContainerStarted","Data":"4cb81ca73bbd8dede80d5595113dab8665a66ebaab58485cee603855168418c5"} Nov 21 14:09:33 crc kubenswrapper[4904]: I1121 14:09:33.688822 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" event={"ID":"f2c6830d-5463-465e-9d1e-eb695b465dbd","Type":"ContainerStarted","Data":"bdde45774623c37671a3f404c9acd919211067df2189a485ae45ec6365d79618"} Nov 21 14:09:33 crc kubenswrapper[4904]: I1121 14:09:33.715149 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" podStartSLOduration=3.022885213 podStartE2EDuration="3.7151264s" podCreationTimestamp="2025-11-21 14:09:30 +0000 UTC" firstStartedPulling="2025-11-21 14:09:31.72181 +0000 UTC m=+2245.843342552" lastFinishedPulling="2025-11-21 14:09:32.414051187 +0000 UTC m=+2246.535583739" observedRunningTime="2025-11-21 14:09:33.710134337 +0000 UTC m=+2247.831666889" watchObservedRunningTime="2025-11-21 14:09:33.7151264 +0000 UTC m=+2247.836658952" Nov 21 14:09:40 crc kubenswrapper[4904]: I1121 14:09:40.515284 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:09:40 crc kubenswrapper[4904]: E1121 14:09:40.516309 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:09:41 crc kubenswrapper[4904]: I1121 14:09:41.772040 4904 generic.go:334] "Generic (PLEG): container finished" podID="f2c6830d-5463-465e-9d1e-eb695b465dbd" containerID="bdde45774623c37671a3f404c9acd919211067df2189a485ae45ec6365d79618" exitCode=0 Nov 21 14:09:41 crc kubenswrapper[4904]: I1121 14:09:41.772140 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" event={"ID":"f2c6830d-5463-465e-9d1e-eb695b465dbd","Type":"ContainerDied","Data":"bdde45774623c37671a3f404c9acd919211067df2189a485ae45ec6365d79618"} Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.269292 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.358979 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-inventory\") pod \"f2c6830d-5463-465e-9d1e-eb695b465dbd\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.359469 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j56s8\" (UniqueName: \"kubernetes.io/projected/f2c6830d-5463-465e-9d1e-eb695b465dbd-kube-api-access-j56s8\") pod \"f2c6830d-5463-465e-9d1e-eb695b465dbd\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.359615 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-ssh-key\") pod \"f2c6830d-5463-465e-9d1e-eb695b465dbd\" (UID: \"f2c6830d-5463-465e-9d1e-eb695b465dbd\") " Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.372984 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c6830d-5463-465e-9d1e-eb695b465dbd-kube-api-access-j56s8" (OuterVolumeSpecName: "kube-api-access-j56s8") pod "f2c6830d-5463-465e-9d1e-eb695b465dbd" (UID: "f2c6830d-5463-465e-9d1e-eb695b465dbd"). InnerVolumeSpecName "kube-api-access-j56s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.391546 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-inventory" (OuterVolumeSpecName: "inventory") pod "f2c6830d-5463-465e-9d1e-eb695b465dbd" (UID: "f2c6830d-5463-465e-9d1e-eb695b465dbd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.403555 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f2c6830d-5463-465e-9d1e-eb695b465dbd" (UID: "f2c6830d-5463-465e-9d1e-eb695b465dbd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.462853 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.462907 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j56s8\" (UniqueName: \"kubernetes.io/projected/f2c6830d-5463-465e-9d1e-eb695b465dbd-kube-api-access-j56s8\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.462923 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f2c6830d-5463-465e-9d1e-eb695b465dbd-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.803934 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" event={"ID":"f2c6830d-5463-465e-9d1e-eb695b465dbd","Type":"ContainerDied","Data":"4cb81ca73bbd8dede80d5595113dab8665a66ebaab58485cee603855168418c5"} Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.804026 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb81ca73bbd8dede80d5595113dab8665a66ebaab58485cee603855168418c5" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.804098 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.887712 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p"] Nov 21 14:09:43 crc kubenswrapper[4904]: E1121 14:09:43.888546 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c6830d-5463-465e-9d1e-eb695b465dbd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.888576 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c6830d-5463-465e-9d1e-eb695b465dbd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.888877 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c6830d-5463-465e-9d1e-eb695b465dbd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.890048 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.893083 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.893504 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.893901 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.894313 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.938202 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p"] Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.978075 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnl4k\" (UniqueName: \"kubernetes.io/projected/193fa023-e769-49e3-8936-f7454a7ccaca-kube-api-access-tnl4k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.978991 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:43 crc kubenswrapper[4904]: I1121 14:09:43.979218 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:44 crc kubenswrapper[4904]: I1121 14:09:44.082518 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:44 crc kubenswrapper[4904]: I1121 14:09:44.082612 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:44 crc kubenswrapper[4904]: I1121 14:09:44.082918 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnl4k\" (UniqueName: \"kubernetes.io/projected/193fa023-e769-49e3-8936-f7454a7ccaca-kube-api-access-tnl4k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: 
\"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:44 crc kubenswrapper[4904]: I1121 14:09:44.088575 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:44 crc kubenswrapper[4904]: I1121 14:09:44.089411 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:44 crc kubenswrapper[4904]: I1121 14:09:44.101050 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnl4k\" (UniqueName: \"kubernetes.io/projected/193fa023-e769-49e3-8936-f7454a7ccaca-kube-api-access-tnl4k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:44 crc kubenswrapper[4904]: I1121 14:09:44.213690 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:44 crc kubenswrapper[4904]: I1121 14:09:44.808642 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p"] Nov 21 14:09:45 crc kubenswrapper[4904]: I1121 14:09:45.829133 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" event={"ID":"193fa023-e769-49e3-8936-f7454a7ccaca","Type":"ContainerStarted","Data":"05b61bac72fd287ff5340bc645cbbcf47ad31208bfee021304b6536b251712d1"} Nov 21 14:09:45 crc kubenswrapper[4904]: I1121 14:09:45.829577 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" event={"ID":"193fa023-e769-49e3-8936-f7454a7ccaca","Type":"ContainerStarted","Data":"83bd809cf6eb2319a77f672362a0f2b35e315a72e945108f0899b46b4f5e316f"} Nov 21 14:09:45 crc kubenswrapper[4904]: I1121 14:09:45.861868 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" podStartSLOduration=2.237558094 podStartE2EDuration="2.861838118s" podCreationTimestamp="2025-11-21 14:09:43 +0000 UTC" firstStartedPulling="2025-11-21 14:09:44.828633509 +0000 UTC m=+2258.950166061" lastFinishedPulling="2025-11-21 14:09:45.452913543 +0000 UTC m=+2259.574446085" observedRunningTime="2025-11-21 14:09:45.849399584 +0000 UTC m=+2259.970932146" watchObservedRunningTime="2025-11-21 14:09:45.861838118 +0000 UTC m=+2259.983370670" Nov 21 14:09:51 crc kubenswrapper[4904]: I1121 14:09:51.514337 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:09:51 crc kubenswrapper[4904]: E1121 14:09:51.515462 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:09:55 crc kubenswrapper[4904]: I1121 14:09:55.963421 4904 generic.go:334] "Generic (PLEG): container finished" podID="193fa023-e769-49e3-8936-f7454a7ccaca" containerID="05b61bac72fd287ff5340bc645cbbcf47ad31208bfee021304b6536b251712d1" exitCode=0 Nov 21 14:09:55 crc kubenswrapper[4904]: I1121 14:09:55.963505 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" event={"ID":"193fa023-e769-49e3-8936-f7454a7ccaca","Type":"ContainerDied","Data":"05b61bac72fd287ff5340bc645cbbcf47ad31208bfee021304b6536b251712d1"} Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.544419 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.669725 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-inventory\") pod \"193fa023-e769-49e3-8936-f7454a7ccaca\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.669949 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnl4k\" (UniqueName: \"kubernetes.io/projected/193fa023-e769-49e3-8936-f7454a7ccaca-kube-api-access-tnl4k\") pod \"193fa023-e769-49e3-8936-f7454a7ccaca\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.669988 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-ssh-key\") pod \"193fa023-e769-49e3-8936-f7454a7ccaca\" (UID: \"193fa023-e769-49e3-8936-f7454a7ccaca\") " Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.677598 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193fa023-e769-49e3-8936-f7454a7ccaca-kube-api-access-tnl4k" (OuterVolumeSpecName: "kube-api-access-tnl4k") pod "193fa023-e769-49e3-8936-f7454a7ccaca" (UID: "193fa023-e769-49e3-8936-f7454a7ccaca"). InnerVolumeSpecName "kube-api-access-tnl4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.712890 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "193fa023-e769-49e3-8936-f7454a7ccaca" (UID: "193fa023-e769-49e3-8936-f7454a7ccaca"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.717490 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-inventory" (OuterVolumeSpecName: "inventory") pod "193fa023-e769-49e3-8936-f7454a7ccaca" (UID: "193fa023-e769-49e3-8936-f7454a7ccaca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.773140 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.773191 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnl4k\" (UniqueName: \"kubernetes.io/projected/193fa023-e769-49e3-8936-f7454a7ccaca-kube-api-access-tnl4k\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.773204 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/193fa023-e769-49e3-8936-f7454a7ccaca-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.996409 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" event={"ID":"193fa023-e769-49e3-8936-f7454a7ccaca","Type":"ContainerDied","Data":"83bd809cf6eb2319a77f672362a0f2b35e315a72e945108f0899b46b4f5e316f"} Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.997003 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83bd809cf6eb2319a77f672362a0f2b35e315a72e945108f0899b46b4f5e316f" Nov 21 14:09:57 crc kubenswrapper[4904]: I1121 14:09:57.996601 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.112337 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm"] Nov 21 14:09:58 crc kubenswrapper[4904]: E1121 14:09:58.113802 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193fa023-e769-49e3-8936-f7454a7ccaca" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.113836 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="193fa023-e769-49e3-8936-f7454a7ccaca" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.114411 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="193fa023-e769-49e3-8936-f7454a7ccaca" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.115624 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.120467 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.120675 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.121104 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.121402 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.121631 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.121722 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.121861 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.121987 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.131742 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm"] Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.289469 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.289573 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.289649 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.290007 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.290113 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.290407 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.290597 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.290847 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.291013 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.291177 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkrtv\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-kube-api-access-kkrtv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.291305 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.291357 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.291509 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.394492 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.394620 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.394722 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.394787 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.394830 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") 
" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.394884 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.394929 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.394997 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.395032 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.395061 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkrtv\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-kube-api-access-kkrtv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.395102 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.395130 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.395158 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.401584 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.401666 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.401702 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.401920 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.402535 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.402838 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.403134 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: 
\"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.403454 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.405327 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.405372 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.406329 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.412697 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.416320 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkrtv\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-kube-api-access-kkrtv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-tfppm\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:58 crc kubenswrapper[4904]: I1121 14:09:58.450090 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:09:59 crc kubenswrapper[4904]: I1121 14:09:59.083907 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm"] Nov 21 14:10:00 crc kubenswrapper[4904]: I1121 14:10:00.025900 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" event={"ID":"e930ad7a-dc96-49b3-9d1b-2fe2fca33545","Type":"ContainerStarted","Data":"c3ff4f3f3f834ba5339446318848db972fc6d25c45fbc2a07ff406aabe913f00"} Nov 21 14:10:00 crc kubenswrapper[4904]: I1121 14:10:00.026739 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" event={"ID":"e930ad7a-dc96-49b3-9d1b-2fe2fca33545","Type":"ContainerStarted","Data":"621b482986bc74d5e2b647143dedaa8084f76856566b99ac78062c6f138c3284"} Nov 21 14:10:00 crc kubenswrapper[4904]: I1121 14:10:00.055188 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" podStartSLOduration=1.5892610010000001 podStartE2EDuration="2.05515905s" podCreationTimestamp="2025-11-21 14:09:58 +0000 UTC" firstStartedPulling="2025-11-21 14:09:59.094683051 +0000 UTC m=+2273.216215603" lastFinishedPulling="2025-11-21 14:09:59.5605811 +0000 UTC m=+2273.682113652" observedRunningTime="2025-11-21 14:10:00.050595099 +0000 UTC m=+2274.172127651" watchObservedRunningTime="2025-11-21 14:10:00.05515905 +0000 UTC m=+2274.176691612" Nov 21 14:10:03 crc kubenswrapper[4904]: I1121 14:10:03.513949 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:10:03 crc kubenswrapper[4904]: E1121 14:10:03.515411 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:10:14 crc kubenswrapper[4904]: I1121 14:10:14.514609 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:10:14 crc kubenswrapper[4904]: E1121 14:10:14.515807 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:10:28 crc kubenswrapper[4904]: I1121 14:10:28.514769 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:10:28 crc kubenswrapper[4904]: E1121 14:10:28.515845 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:10:41 crc kubenswrapper[4904]: I1121 14:10:41.046396 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-td4l8"] Nov 21 14:10:41 crc kubenswrapper[4904]: I1121 14:10:41.058537 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-td4l8"] Nov 21 14:10:42 crc kubenswrapper[4904]: I1121 14:10:42.513900 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:10:42 crc kubenswrapper[4904]: E1121 14:10:42.514700 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:10:42 crc kubenswrapper[4904]: I1121 14:10:42.532809 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6cc886d-3403-4dab-82a8-35aacd9e2bc1" path="/var/lib/kubelet/pods/a6cc886d-3403-4dab-82a8-35aacd9e2bc1/volumes" Nov 21 14:10:43 crc kubenswrapper[4904]: I1121 14:10:43.533602 4904 generic.go:334] "Generic (PLEG): container finished" podID="e930ad7a-dc96-49b3-9d1b-2fe2fca33545" containerID="c3ff4f3f3f834ba5339446318848db972fc6d25c45fbc2a07ff406aabe913f00" exitCode=0 Nov 21 14:10:43 crc kubenswrapper[4904]: I1121 14:10:43.533703 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" event={"ID":"e930ad7a-dc96-49b3-9d1b-2fe2fca33545","Type":"ContainerDied","Data":"c3ff4f3f3f834ba5339446318848db972fc6d25c45fbc2a07ff406aabe913f00"} Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.050911 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.169902 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-combined-ca-bundle\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.169958 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.169985 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-libvirt-combined-ca-bundle\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170028 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-bootstrap-combined-ca-bundle\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170077 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170144 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-inventory\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170178 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-repo-setup-combined-ca-bundle\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170215 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ssh-key\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170332 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ovn-combined-ca-bundle\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170376 4904 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkrtv\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-kube-api-access-kkrtv\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170429 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170465 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.170498 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-power-monitoring-combined-ca-bundle\") pod \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\" (UID: \"e930ad7a-dc96-49b3-9d1b-2fe2fca33545\") " Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.191154 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.191238 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.191251 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.191309 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.191316 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.191352 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-kube-api-access-kkrtv" (OuterVolumeSpecName: "kube-api-access-kkrtv") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "kube-api-access-kkrtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.191714 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.191968 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.192066 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.193364 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.199006 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.228969 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-inventory" (OuterVolumeSpecName: "inventory") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.231844 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e930ad7a-dc96-49b3-9d1b-2fe2fca33545" (UID: "e930ad7a-dc96-49b3-9d1b-2fe2fca33545"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273824 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273873 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273884 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273897 4904 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273906 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273918 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273927 4904 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273938 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273952 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 
14:10:45.273963 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkrtv\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-kube-api-access-kkrtv\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273974 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273984 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.273995 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e930ad7a-dc96-49b3-9d1b-2fe2fca33545-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.556645 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" event={"ID":"e930ad7a-dc96-49b3-9d1b-2fe2fca33545","Type":"ContainerDied","Data":"621b482986bc74d5e2b647143dedaa8084f76856566b99ac78062c6f138c3284"} Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.556704 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="621b482986bc74d5e2b647143dedaa8084f76856566b99ac78062c6f138c3284" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.556779 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.675136 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n"] Nov 21 14:10:45 crc kubenswrapper[4904]: E1121 14:10:45.675683 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e930ad7a-dc96-49b3-9d1b-2fe2fca33545" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.675703 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e930ad7a-dc96-49b3-9d1b-2fe2fca33545" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.675952 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e930ad7a-dc96-49b3-9d1b-2fe2fca33545" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.677048 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.679606 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.680733 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.680845 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.681304 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.682526 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.720728 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n"] Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.786330 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.786682 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.786852 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.786975 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5n2r\" (UniqueName: \"kubernetes.io/projected/86a68773-3a8c-402f-ba62-6bd46459d2c0-kube-api-access-f5n2r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.787343 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.889958 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.890079 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.890118 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.890215 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.890276 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5n2r\" (UniqueName: \"kubernetes.io/projected/86a68773-3a8c-402f-ba62-6bd46459d2c0-kube-api-access-f5n2r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.891275 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.896180 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.896254 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.896508 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:45 crc kubenswrapper[4904]: I1121 14:10:45.910031 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5n2r\" (UniqueName: \"kubernetes.io/projected/86a68773-3a8c-402f-ba62-6bd46459d2c0-kube-api-access-f5n2r\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-rgx5n\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:46 crc kubenswrapper[4904]: I1121 14:10:46.000586 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:10:46 crc kubenswrapper[4904]: I1121 14:10:46.603491 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n"] Nov 21 14:10:47 crc kubenswrapper[4904]: I1121 14:10:47.583811 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" event={"ID":"86a68773-3a8c-402f-ba62-6bd46459d2c0","Type":"ContainerStarted","Data":"9cf4a63321a7cb41764cbaa9d1932886f53a2d66c54872eaf1e6f8be06623e61"} Nov 21 14:10:47 crc kubenswrapper[4904]: I1121 14:10:47.584349 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" event={"ID":"86a68773-3a8c-402f-ba62-6bd46459d2c0","Type":"ContainerStarted","Data":"10b863248d21e8480c9215d64e4b49a54b3a0ad742636b394c1f27c7b6eea6c1"} Nov 21 14:10:47 crc kubenswrapper[4904]: I1121 14:10:47.617308 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" podStartSLOduration=2.138247736 podStartE2EDuration="2.617275686s" podCreationTimestamp="2025-11-21 14:10:45 +0000 UTC" firstStartedPulling="2025-11-21 14:10:46.611788315 +0000 UTC m=+2320.733320907" lastFinishedPulling="2025-11-21 14:10:47.090816305 +0000 UTC m=+2321.212348857" observedRunningTime="2025-11-21 14:10:47.60558871 +0000 UTC m=+2321.727121262" watchObservedRunningTime="2025-11-21 14:10:47.617275686 +0000 UTC m=+2321.738808258" Nov 21 14:10:55 crc kubenswrapper[4904]: I1121 14:10:55.514282 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:10:55 crc kubenswrapper[4904]: E1121 14:10:55.515827 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:11:08 crc kubenswrapper[4904]: I1121 14:11:08.513610 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:11:08 crc kubenswrapper[4904]: E1121 14:11:08.514741 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:11:23 crc 
kubenswrapper[4904]: I1121 14:11:23.514003 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:11:23 crc kubenswrapper[4904]: E1121 14:11:23.515151 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:11:28 crc kubenswrapper[4904]: I1121 14:11:28.773565 4904 scope.go:117] "RemoveContainer" containerID="762d7daf2b31677c4454baad94d67ce502d86b6b6ccdaad7bfb655acb9820efa" Nov 21 14:11:31 crc kubenswrapper[4904]: I1121 14:11:31.061952 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-g7z2w"] Nov 21 14:11:31 crc kubenswrapper[4904]: I1121 14:11:31.075945 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-g7z2w"] Nov 21 14:11:32 crc kubenswrapper[4904]: I1121 14:11:32.529368 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="246116d1-5334-46cf-bcb6-dd2223cf67d7" path="/var/lib/kubelet/pods/246116d1-5334-46cf-bcb6-dd2223cf67d7/volumes" Nov 21 14:11:38 crc kubenswrapper[4904]: I1121 14:11:38.514067 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:11:38 crc kubenswrapper[4904]: E1121 14:11:38.515473 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:11:52 crc kubenswrapper[4904]: I1121 14:11:52.513693 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:11:52 crc kubenswrapper[4904]: E1121 14:11:52.514711 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:12:05 crc kubenswrapper[4904]: I1121 14:12:05.447852 4904 generic.go:334] "Generic (PLEG): container finished" podID="86a68773-3a8c-402f-ba62-6bd46459d2c0" containerID="9cf4a63321a7cb41764cbaa9d1932886f53a2d66c54872eaf1e6f8be06623e61" exitCode=0 Nov 21 14:12:05 crc kubenswrapper[4904]: I1121 14:12:05.447983 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" event={"ID":"86a68773-3a8c-402f-ba62-6bd46459d2c0","Type":"ContainerDied","Data":"9cf4a63321a7cb41764cbaa9d1932886f53a2d66c54872eaf1e6f8be06623e61"} Nov 21 14:12:06 crc kubenswrapper[4904]: I1121 14:12:06.532221 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:12:06 crc kubenswrapper[4904]: E1121 
14:12:06.532862 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:12:06 crc kubenswrapper[4904]: I1121 14:12:06.989366 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.073202 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-inventory\") pod \"86a68773-3a8c-402f-ba62-6bd46459d2c0\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.073247 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovn-combined-ca-bundle\") pod \"86a68773-3a8c-402f-ba62-6bd46459d2c0\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.073275 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ssh-key\") pod \"86a68773-3a8c-402f-ba62-6bd46459d2c0\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.073404 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5n2r\" (UniqueName: \"kubernetes.io/projected/86a68773-3a8c-402f-ba62-6bd46459d2c0-kube-api-access-f5n2r\") pod \"86a68773-3a8c-402f-ba62-6bd46459d2c0\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.073567 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovncontroller-config-0\") pod \"86a68773-3a8c-402f-ba62-6bd46459d2c0\" (UID: \"86a68773-3a8c-402f-ba62-6bd46459d2c0\") " Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.098846 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "86a68773-3a8c-402f-ba62-6bd46459d2c0" (UID: "86a68773-3a8c-402f-ba62-6bd46459d2c0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.098897 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a68773-3a8c-402f-ba62-6bd46459d2c0-kube-api-access-f5n2r" (OuterVolumeSpecName: "kube-api-access-f5n2r") pod "86a68773-3a8c-402f-ba62-6bd46459d2c0" (UID: "86a68773-3a8c-402f-ba62-6bd46459d2c0"). InnerVolumeSpecName "kube-api-access-f5n2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.119357 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "86a68773-3a8c-402f-ba62-6bd46459d2c0" (UID: "86a68773-3a8c-402f-ba62-6bd46459d2c0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.136415 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "86a68773-3a8c-402f-ba62-6bd46459d2c0" (UID: "86a68773-3a8c-402f-ba62-6bd46459d2c0"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.141905 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-inventory" (OuterVolumeSpecName: "inventory") pod "86a68773-3a8c-402f-ba62-6bd46459d2c0" (UID: "86a68773-3a8c-402f-ba62-6bd46459d2c0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.176738 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.176776 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.176787 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86a68773-3a8c-402f-ba62-6bd46459d2c0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.176798 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5n2r\" (UniqueName: \"kubernetes.io/projected/86a68773-3a8c-402f-ba62-6bd46459d2c0-kube-api-access-f5n2r\") on node \"crc\" DevicePath \"\"" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.176809 4904 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/86a68773-3a8c-402f-ba62-6bd46459d2c0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.472390 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" event={"ID":"86a68773-3a8c-402f-ba62-6bd46459d2c0","Type":"ContainerDied","Data":"10b863248d21e8480c9215d64e4b49a54b3a0ad742636b394c1f27c7b6eea6c1"} Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.472453 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b863248d21e8480c9215d64e4b49a54b3a0ad742636b394c1f27c7b6eea6c1" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.472523 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.588673 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q"] Nov 21 14:12:07 crc kubenswrapper[4904]: E1121 14:12:07.591581 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a68773-3a8c-402f-ba62-6bd46459d2c0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.591629 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a68773-3a8c-402f-ba62-6bd46459d2c0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.592861 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a68773-3a8c-402f-ba62-6bd46459d2c0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.595265 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.623979 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.624535 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.624588 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.624643 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.624916 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.637797 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q"] Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.694166 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkgrw\" (UniqueName: \"kubernetes.io/projected/29e44d23-28ea-460f-9742-9292290aecda-kube-api-access-dkgrw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.694376 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.694934 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.695185 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.695290 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.797994 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkgrw\" (UniqueName: \"kubernetes.io/projected/29e44d23-28ea-460f-9742-9292290aecda-kube-api-access-dkgrw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.798098 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.798251 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.798338 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.798375 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.804303 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc 
kubenswrapper[4904]: I1121 14:12:07.804482 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.808431 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.815812 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.819266 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkgrw\" (UniqueName: \"kubernetes.io/projected/29e44d23-28ea-460f-9742-9292290aecda-kube-api-access-dkgrw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:07 crc kubenswrapper[4904]: I1121 14:12:07.989207 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:12:08 crc kubenswrapper[4904]: I1121 14:12:08.713115 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q"] Nov 21 14:12:09 crc kubenswrapper[4904]: I1121 14:12:09.492698 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" event={"ID":"29e44d23-28ea-460f-9742-9292290aecda","Type":"ContainerStarted","Data":"fe021ad644728d6af724d755652bd514ef3312f3ffe7642012f9894fe7eb88f8"} Nov 21 14:12:10 crc kubenswrapper[4904]: I1121 14:12:10.505100 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" event={"ID":"29e44d23-28ea-460f-9742-9292290aecda","Type":"ContainerStarted","Data":"116bd5f733f6c9f509b56a4eafd3fac5ee3226e35efce867c27f2ab5ae11ea68"} Nov 21 14:12:10 crc kubenswrapper[4904]: I1121 14:12:10.529085 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" podStartSLOduration=3.007331462 podStartE2EDuration="3.529057287s" podCreationTimestamp="2025-11-21 14:12:07 +0000 UTC" firstStartedPulling="2025-11-21 14:12:08.726567506 +0000 UTC m=+2402.848100058" lastFinishedPulling="2025-11-21 14:12:09.248293321 +0000 UTC m=+2403.369825883" observedRunningTime="2025-11-21 14:12:10.524894955 +0000 UTC m=+2404.646427517" watchObservedRunningTime="2025-11-21 14:12:10.529057287 +0000 UTC m=+2404.650589849" Nov 21 14:12:19 crc kubenswrapper[4904]: I1121 14:12:19.513038 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:12:19 crc kubenswrapper[4904]: E1121 14:12:19.515532 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:12:28 crc kubenswrapper[4904]: I1121 14:12:28.857954 4904 scope.go:117] "RemoveContainer" containerID="4ba5431600e6a2f6cc6996d335ae22dc3d3d1a58cbb442b8052442e67bc9d915" Nov 21 14:12:31 crc kubenswrapper[4904]: I1121 14:12:31.526157 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:12:31 crc kubenswrapper[4904]: E1121 14:12:31.527722 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:12:46 crc kubenswrapper[4904]: I1121 14:12:46.520711 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:12:46 crc kubenswrapper[4904]: E1121 14:12:46.521691 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:12:59 crc kubenswrapper[4904]: I1121 14:12:59.533352 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:12:59 crc kubenswrapper[4904]: E1121 14:12:59.534994 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:13:10 crc kubenswrapper[4904]: I1121 14:13:10.513262 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:13:10 crc kubenswrapper[4904]: E1121 14:13:10.514447 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:13:24 crc kubenswrapper[4904]: I1121 14:13:24.514408 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:13:24 crc kubenswrapper[4904]: E1121 14:13:24.515795 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.511126 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dxvxm"] Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.526127 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.546981 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxvxm"] Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.643218 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-catalog-content\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.643769 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-utilities\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.643811 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrfjc\" (UniqueName: \"kubernetes.io/projected/50b803da-5278-48c2-9c90-fab083040cbb-kube-api-access-jrfjc\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.745860 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-utilities\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.745947 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrfjc\" (UniqueName: \"kubernetes.io/projected/50b803da-5278-48c2-9c90-fab083040cbb-kube-api-access-jrfjc\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.746237 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-catalog-content\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.746488 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-utilities\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.746872 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-catalog-content\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.775137 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jrfjc\" (UniqueName: \"kubernetes.io/projected/50b803da-5278-48c2-9c90-fab083040cbb-kube-api-access-jrfjc\") pod \"redhat-operators-dxvxm\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:36 crc kubenswrapper[4904]: I1121 14:13:36.864676 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:37 crc kubenswrapper[4904]: I1121 14:13:37.413893 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxvxm"] Nov 21 14:13:37 crc kubenswrapper[4904]: I1121 14:13:37.514178 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:13:37 crc kubenswrapper[4904]: E1121 14:13:37.514721 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:13:37 crc kubenswrapper[4904]: I1121 14:13:37.655405 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxvxm" event={"ID":"50b803da-5278-48c2-9c90-fab083040cbb","Type":"ContainerStarted","Data":"85e8ba8b1bd9593532f25364db06a62ef0e82fd4252947709a1402b94d901c3b"} Nov 21 14:13:38 crc kubenswrapper[4904]: I1121 14:13:38.668101 4904 generic.go:334] "Generic (PLEG): container finished" podID="50b803da-5278-48c2-9c90-fab083040cbb" containerID="6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17" exitCode=0 Nov 21 14:13:38 crc kubenswrapper[4904]: I1121 14:13:38.668207 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxvxm" event={"ID":"50b803da-5278-48c2-9c90-fab083040cbb","Type":"ContainerDied","Data":"6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17"} Nov 21 14:13:40 crc kubenswrapper[4904]: I1121 14:13:40.709531 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxvxm" event={"ID":"50b803da-5278-48c2-9c90-fab083040cbb","Type":"ContainerStarted","Data":"d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2"} Nov 21 14:13:46 crc kubenswrapper[4904]: I1121 14:13:46.783293 4904 generic.go:334] "Generic (PLEG): container finished" podID="50b803da-5278-48c2-9c90-fab083040cbb" containerID="d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2" exitCode=0 Nov 21 14:13:46 crc kubenswrapper[4904]: I1121 14:13:46.784398 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxvxm" event={"ID":"50b803da-5278-48c2-9c90-fab083040cbb","Type":"ContainerDied","Data":"d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2"} Nov 21 14:13:48 crc kubenswrapper[4904]: I1121 14:13:48.516234 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:13:48 crc kubenswrapper[4904]: E1121 14:13:48.519361 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:13:48 crc kubenswrapper[4904]: I1121 14:13:48.811830 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxvxm" event={"ID":"50b803da-5278-48c2-9c90-fab083040cbb","Type":"ContainerStarted","Data":"09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9"} Nov 21 14:13:48 crc kubenswrapper[4904]: I1121 14:13:48.836408 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dxvxm" podStartSLOduration=3.606277088 podStartE2EDuration="12.836383227s" podCreationTimestamp="2025-11-21 14:13:36 +0000 UTC" firstStartedPulling="2025-11-21 14:13:38.672091102 +0000 UTC m=+2492.793623674" lastFinishedPulling="2025-11-21 14:13:47.902197241 +0000 UTC m=+2502.023729813" observedRunningTime="2025-11-21 14:13:48.830154275 +0000 UTC m=+2502.951686847" watchObservedRunningTime="2025-11-21 14:13:48.836383227 +0000 UTC m=+2502.957915789" Nov 21 14:13:56 crc kubenswrapper[4904]: I1121 14:13:56.866006 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:56 crc kubenswrapper[4904]: I1121 14:13:56.868617 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:56 crc kubenswrapper[4904]: I1121 14:13:56.925779 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:56 crc kubenswrapper[4904]: I1121 14:13:56.981372 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:57 crc kubenswrapper[4904]: I1121 14:13:57.167891 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxvxm"] Nov 21 14:13:58 crc kubenswrapper[4904]: I1121 14:13:58.938579 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dxvxm" podUID="50b803da-5278-48c2-9c90-fab083040cbb" containerName="registry-server" containerID="cri-o://09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9" gracePeriod=2 Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.522669 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.588032 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-utilities\") pod \"50b803da-5278-48c2-9c90-fab083040cbb\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.588182 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrfjc\" (UniqueName: \"kubernetes.io/projected/50b803da-5278-48c2-9c90-fab083040cbb-kube-api-access-jrfjc\") pod \"50b803da-5278-48c2-9c90-fab083040cbb\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.588992 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-utilities" (OuterVolumeSpecName: "utilities") pod "50b803da-5278-48c2-9c90-fab083040cbb" (UID: "50b803da-5278-48c2-9c90-fab083040cbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.590162 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-catalog-content\") pod \"50b803da-5278-48c2-9c90-fab083040cbb\" (UID: \"50b803da-5278-48c2-9c90-fab083040cbb\") " Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.591578 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.602059 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b803da-5278-48c2-9c90-fab083040cbb-kube-api-access-jrfjc" (OuterVolumeSpecName: "kube-api-access-jrfjc") pod "50b803da-5278-48c2-9c90-fab083040cbb" (UID: "50b803da-5278-48c2-9c90-fab083040cbb"). InnerVolumeSpecName "kube-api-access-jrfjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.694186 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrfjc\" (UniqueName: \"kubernetes.io/projected/50b803da-5278-48c2-9c90-fab083040cbb-kube-api-access-jrfjc\") on node \"crc\" DevicePath \"\"" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.694713 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50b803da-5278-48c2-9c90-fab083040cbb" (UID: "50b803da-5278-48c2-9c90-fab083040cbb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.796299 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50b803da-5278-48c2-9c90-fab083040cbb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.954070 4904 generic.go:334] "Generic (PLEG): container finished" podID="50b803da-5278-48c2-9c90-fab083040cbb" containerID="09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9" exitCode=0 Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.954123 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxvxm" event={"ID":"50b803da-5278-48c2-9c90-fab083040cbb","Type":"ContainerDied","Data":"09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9"} Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.954160 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxvxm" event={"ID":"50b803da-5278-48c2-9c90-fab083040cbb","Type":"ContainerDied","Data":"85e8ba8b1bd9593532f25364db06a62ef0e82fd4252947709a1402b94d901c3b"} Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.954173 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxvxm" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.954185 4904 scope.go:117] "RemoveContainer" containerID="09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9" Nov 21 14:13:59 crc kubenswrapper[4904]: I1121 14:13:59.979317 4904 scope.go:117] "RemoveContainer" containerID="d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2" Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.001413 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxvxm"] Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.012441 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dxvxm"] Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.024366 4904 scope.go:117] "RemoveContainer" containerID="6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17" Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.080845 4904 scope.go:117] "RemoveContainer" containerID="09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9" Nov 21 14:14:00 crc kubenswrapper[4904]: E1121 14:14:00.082038 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9\": container with ID starting with 09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9 not found: ID does not exist" containerID="09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9" Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.082120 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9"} err="failed to get container status \"09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9\": rpc error: code = NotFound desc = could not find container \"09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9\": container with ID starting with 09f6f7cfec068d9f40ffda9d185be22f55fa0c25f2ee7cda617436f48c7ffeb9 not found: ID does not exist" Nov 21 14:14:00 crc 
kubenswrapper[4904]: I1121 14:14:00.082180 4904 scope.go:117] "RemoveContainer" containerID="d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2" Nov 21 14:14:00 crc kubenswrapper[4904]: E1121 14:14:00.082715 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2\": container with ID starting with d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2 not found: ID does not exist" containerID="d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2" Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.082747 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2"} err="failed to get container status \"d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2\": rpc error: code = NotFound desc = could not find container \"d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2\": container with ID starting with d46f3ec1207290cd6fe40c4055fac59aea68b2a2bb37f77c79bbe117d8087cd2 not found: ID does not exist" Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.082767 4904 scope.go:117] "RemoveContainer" containerID="6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17" Nov 21 14:14:00 crc kubenswrapper[4904]: E1121 14:14:00.083272 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17\": container with ID starting with 6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17 not found: ID does not exist" containerID="6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17" Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.083317 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17"} err="failed to get container status \"6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17\": rpc error: code = NotFound desc = could not find container \"6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17\": container with ID starting with 6705656d9e404b0cc6a0d3ffb098e1ec83f44fc5f543230ecfc5fae875453e17 not found: ID does not exist" Nov 21 14:14:00 crc kubenswrapper[4904]: I1121 14:14:00.538006 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b803da-5278-48c2-9c90-fab083040cbb" path="/var/lib/kubelet/pods/50b803da-5278-48c2-9c90-fab083040cbb/volumes" Nov 21 14:14:01 crc kubenswrapper[4904]: I1121 14:14:01.513491 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:14:01 crc kubenswrapper[4904]: I1121 14:14:01.990484 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"b4ff6c2748eb1b91f361617b938e7a87dd68f8ac136364d295e380f97e4b020c"} Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.447536 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rwq78"] Nov 21 14:14:46 crc kubenswrapper[4904]: E1121 14:14:46.449308 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="50b803da-5278-48c2-9c90-fab083040cbb" containerName="extract-content" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.449330 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b803da-5278-48c2-9c90-fab083040cbb" containerName="extract-content" Nov 21 14:14:46 crc kubenswrapper[4904]: E1121 14:14:46.449353 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b803da-5278-48c2-9c90-fab083040cbb" containerName="registry-server" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.449362 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b803da-5278-48c2-9c90-fab083040cbb" containerName="registry-server" Nov 21 14:14:46 crc kubenswrapper[4904]: E1121 14:14:46.449381 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b803da-5278-48c2-9c90-fab083040cbb" containerName="extract-utilities" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.449393 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b803da-5278-48c2-9c90-fab083040cbb" containerName="extract-utilities" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.449834 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b803da-5278-48c2-9c90-fab083040cbb" containerName="registry-server" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.452446 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.460230 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwq78"] Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.652553 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lnvc\" (UniqueName: \"kubernetes.io/projected/c4bdd113-72e3-46a4-9f22-804a6f6c037e-kube-api-access-5lnvc\") pod \"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.652710 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-utilities\") pod \"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.652881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-catalog-content\") pod \"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.756194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-catalog-content\") pod \"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.756399 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lnvc\" (UniqueName: \"kubernetes.io/projected/c4bdd113-72e3-46a4-9f22-804a6f6c037e-kube-api-access-5lnvc\") pod 
\"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.756438 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-utilities\") pod \"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.756868 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-catalog-content\") pod \"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.756957 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-utilities\") pod \"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.780748 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lnvc\" (UniqueName: \"kubernetes.io/projected/c4bdd113-72e3-46a4-9f22-804a6f6c037e-kube-api-access-5lnvc\") pod \"redhat-marketplace-rwq78\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:46 crc kubenswrapper[4904]: I1121 14:14:46.789829 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:47 crc kubenswrapper[4904]: I1121 14:14:47.446273 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwq78"] Nov 21 14:14:47 crc kubenswrapper[4904]: I1121 14:14:47.548820 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwq78" event={"ID":"c4bdd113-72e3-46a4-9f22-804a6f6c037e","Type":"ContainerStarted","Data":"05837a61d1925d1bfa9daa06ae8610c93a5cfae5e2de526641031497f7b59387"} Nov 21 14:14:48 crc kubenswrapper[4904]: I1121 14:14:48.563101 4904 generic.go:334] "Generic (PLEG): container finished" podID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerID="4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e" exitCode=0 Nov 21 14:14:48 crc kubenswrapper[4904]: I1121 14:14:48.563162 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwq78" event={"ID":"c4bdd113-72e3-46a4-9f22-804a6f6c037e","Type":"ContainerDied","Data":"4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e"} Nov 21 14:14:48 crc kubenswrapper[4904]: I1121 14:14:48.566839 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:14:50 crc kubenswrapper[4904]: I1121 14:14:50.588787 4904 generic.go:334] "Generic (PLEG): container finished" podID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerID="d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b" exitCode=0 Nov 21 14:14:50 crc kubenswrapper[4904]: I1121 14:14:50.588882 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwq78" event={"ID":"c4bdd113-72e3-46a4-9f22-804a6f6c037e","Type":"ContainerDied","Data":"d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b"} Nov 21 14:14:53 crc kubenswrapper[4904]: I1121 14:14:53.631686 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwq78" event={"ID":"c4bdd113-72e3-46a4-9f22-804a6f6c037e","Type":"ContainerStarted","Data":"78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a"} Nov 21 14:14:53 crc kubenswrapper[4904]: I1121 14:14:53.666968 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rwq78" podStartSLOduration=4.26740749 podStartE2EDuration="7.666942345s" podCreationTimestamp="2025-11-21 14:14:46 +0000 UTC" firstStartedPulling="2025-11-21 14:14:48.566114405 +0000 UTC m=+2562.687646977" lastFinishedPulling="2025-11-21 14:14:51.96564926 +0000 UTC m=+2566.087181832" observedRunningTime="2025-11-21 14:14:53.6573577 +0000 UTC m=+2567.778890282" watchObservedRunningTime="2025-11-21 14:14:53.666942345 +0000 UTC m=+2567.788474887" Nov 21 14:14:56 crc kubenswrapper[4904]: I1121 14:14:56.790036 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:56 crc kubenswrapper[4904]: I1121 14:14:56.790921 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:56 crc kubenswrapper[4904]: I1121 14:14:56.848179 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:57 crc kubenswrapper[4904]: I1121 14:14:57.779489 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:14:57 crc kubenswrapper[4904]: I1121 14:14:57.851302 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwq78"] Nov 21 14:14:59 crc kubenswrapper[4904]: I1121 14:14:59.709243 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rwq78" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerName="registry-server" containerID="cri-o://78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a" gracePeriod=2 Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.190705 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn"] Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.192693 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.196762 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.201428 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.213944 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn"] Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.283139 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-secret-volume\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.283377 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-config-volume\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.283446 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvzwf\" (UniqueName: \"kubernetes.io/projected/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-kube-api-access-tvzwf\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.385610 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-config-volume\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.385731 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvzwf\" (UniqueName: 
\"kubernetes.io/projected/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-kube-api-access-tvzwf\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.385791 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-secret-volume\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.390868 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-config-volume\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.395225 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.396561 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-secret-volume\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.412500 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvzwf\" (UniqueName: \"kubernetes.io/projected/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-kube-api-access-tvzwf\") pod \"collect-profiles-29395575-2bmgn\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.500368 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-utilities\") pod \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.500613 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lnvc\" (UniqueName: \"kubernetes.io/projected/c4bdd113-72e3-46a4-9f22-804a6f6c037e-kube-api-access-5lnvc\") pod \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.500711 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-catalog-content\") pod \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\" (UID: \"c4bdd113-72e3-46a4-9f22-804a6f6c037e\") " Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.504713 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-utilities" (OuterVolumeSpecName: "utilities") pod "c4bdd113-72e3-46a4-9f22-804a6f6c037e" (UID: "c4bdd113-72e3-46a4-9f22-804a6f6c037e"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.510611 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bdd113-72e3-46a4-9f22-804a6f6c037e-kube-api-access-5lnvc" (OuterVolumeSpecName: "kube-api-access-5lnvc") pod "c4bdd113-72e3-46a4-9f22-804a6f6c037e" (UID: "c4bdd113-72e3-46a4-9f22-804a6f6c037e"). InnerVolumeSpecName "kube-api-access-5lnvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.517951 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.529589 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4bdd113-72e3-46a4-9f22-804a6f6c037e" (UID: "c4bdd113-72e3-46a4-9f22-804a6f6c037e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.604745 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lnvc\" (UniqueName: \"kubernetes.io/projected/c4bdd113-72e3-46a4-9f22-804a6f6c037e-kube-api-access-5lnvc\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.604773 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.604782 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4bdd113-72e3-46a4-9f22-804a6f6c037e-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.726502 4904 generic.go:334] "Generic (PLEG): container finished" podID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerID="78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a" exitCode=0 Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.726597 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwq78" event={"ID":"c4bdd113-72e3-46a4-9f22-804a6f6c037e","Type":"ContainerDied","Data":"78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a"} Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.726980 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwq78" event={"ID":"c4bdd113-72e3-46a4-9f22-804a6f6c037e","Type":"ContainerDied","Data":"05837a61d1925d1bfa9daa06ae8610c93a5cfae5e2de526641031497f7b59387"} Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.727014 4904 scope.go:117] "RemoveContainer" containerID="78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.726707 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwq78" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.762124 4904 scope.go:117] "RemoveContainer" containerID="d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.779284 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwq78"] Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.789326 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwq78"] Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.801457 4904 scope.go:117] "RemoveContainer" containerID="4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.844735 4904 scope.go:117] "RemoveContainer" containerID="78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a" Nov 21 14:15:00 crc kubenswrapper[4904]: E1121 14:15:00.845375 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a\": container with ID starting with 78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a not found: ID does not exist" containerID="78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.845417 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a"} err="failed to get container status \"78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a\": rpc error: code = NotFound desc = could not find container \"78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a\": container with ID starting with 78e31a9d4ca33d2a34492145b453188ef9ff3cd734189d8d4c5df5273412537a not found: ID does not exist" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.845450 4904 scope.go:117] "RemoveContainer" containerID="d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b" Nov 21 14:15:00 crc kubenswrapper[4904]: E1121 14:15:00.845931 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b\": container with ID starting with d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b not found: ID does not exist" containerID="d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.845962 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b"} err="failed to get container status \"d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b\": rpc error: code = NotFound desc = could not find container \"d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b\": container with ID starting with d09622d397c8dd673370a9e6e710de7ed18e34a6bcd76a6b96eeb7e161b4104b not found: ID does not exist" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.845988 4904 scope.go:117] "RemoveContainer" containerID="4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e" Nov 21 14:15:00 crc kubenswrapper[4904]: E1121 14:15:00.846447 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e\": container with ID starting with 4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e not found: ID does not exist" containerID="4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.846480 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e"} err="failed to get container status \"4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e\": rpc error: code = NotFound desc = could not find container \"4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e\": container with ID starting with 4ece64f3e4f2019f746b3961085d734adb929708d530cd483c825ff803ef820e not found: ID does not exist" Nov 21 14:15:00 crc kubenswrapper[4904]: I1121 14:15:00.980208 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn"] Nov 21 14:15:00 crc kubenswrapper[4904]: W1121 14:15:00.986235 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37cbcccf_c50a_4124_9689_6dd7ba9a80f5.slice/crio-de1684884682259c3af01690ce3fec5d3dc64857690836e9d57b092a44081bf2 WatchSource:0}: Error finding container de1684884682259c3af01690ce3fec5d3dc64857690836e9d57b092a44081bf2: Status 404 returned error can't find the container with id de1684884682259c3af01690ce3fec5d3dc64857690836e9d57b092a44081bf2 Nov 21 14:15:01 crc kubenswrapper[4904]: I1121 14:15:01.743057 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" event={"ID":"37cbcccf-c50a-4124-9689-6dd7ba9a80f5","Type":"ContainerStarted","Data":"9450912097a3063fa337ba87946c002d6889d2cc6d846fbd629e2f4614cc3638"} Nov 21 14:15:01 crc kubenswrapper[4904]: I1121 14:15:01.743548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" event={"ID":"37cbcccf-c50a-4124-9689-6dd7ba9a80f5","Type":"ContainerStarted","Data":"de1684884682259c3af01690ce3fec5d3dc64857690836e9d57b092a44081bf2"} Nov 21 14:15:01 crc kubenswrapper[4904]: I1121 14:15:01.766801 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" podStartSLOduration=1.766773631 podStartE2EDuration="1.766773631s" podCreationTimestamp="2025-11-21 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:15:01.763345057 +0000 UTC m=+2575.884877619" watchObservedRunningTime="2025-11-21 14:15:01.766773631 +0000 UTC m=+2575.888306173" Nov 21 14:15:02 crc kubenswrapper[4904]: I1121 14:15:02.530555 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" path="/var/lib/kubelet/pods/c4bdd113-72e3-46a4-9f22-804a6f6c037e/volumes" Nov 21 14:15:02 crc kubenswrapper[4904]: I1121 14:15:02.760993 4904 generic.go:334] "Generic (PLEG): container finished" podID="37cbcccf-c50a-4124-9689-6dd7ba9a80f5" containerID="9450912097a3063fa337ba87946c002d6889d2cc6d846fbd629e2f4614cc3638" exitCode=0 Nov 21 14:15:02 crc kubenswrapper[4904]: I1121 14:15:02.761059 4904 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" event={"ID":"37cbcccf-c50a-4124-9689-6dd7ba9a80f5","Type":"ContainerDied","Data":"9450912097a3063fa337ba87946c002d6889d2cc6d846fbd629e2f4614cc3638"} Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.221151 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.298271 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-config-volume\") pod \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.298369 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-secret-volume\") pod \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.298502 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvzwf\" (UniqueName: \"kubernetes.io/projected/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-kube-api-access-tvzwf\") pod \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\" (UID: \"37cbcccf-c50a-4124-9689-6dd7ba9a80f5\") " Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.299348 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-config-volume" (OuterVolumeSpecName: "config-volume") pod "37cbcccf-c50a-4124-9689-6dd7ba9a80f5" (UID: "37cbcccf-c50a-4124-9689-6dd7ba9a80f5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.308469 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-kube-api-access-tvzwf" (OuterVolumeSpecName: "kube-api-access-tvzwf") pod "37cbcccf-c50a-4124-9689-6dd7ba9a80f5" (UID: "37cbcccf-c50a-4124-9689-6dd7ba9a80f5"). InnerVolumeSpecName "kube-api-access-tvzwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.308562 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37cbcccf-c50a-4124-9689-6dd7ba9a80f5" (UID: "37cbcccf-c50a-4124-9689-6dd7ba9a80f5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.402086 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.402422 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.402503 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvzwf\" (UniqueName: \"kubernetes.io/projected/37cbcccf-c50a-4124-9689-6dd7ba9a80f5-kube-api-access-tvzwf\") on node \"crc\" DevicePath \"\"" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.796893 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" event={"ID":"37cbcccf-c50a-4124-9689-6dd7ba9a80f5","Type":"ContainerDied","Data":"de1684884682259c3af01690ce3fec5d3dc64857690836e9d57b092a44081bf2"} Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.796942 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de1684884682259c3af01690ce3fec5d3dc64857690836e9d57b092a44081bf2" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.797010 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn" Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.866402 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"] Nov 21 14:15:04 crc kubenswrapper[4904]: I1121 14:15:04.876830 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395530-mwq7r"] Nov 21 14:15:06 crc kubenswrapper[4904]: I1121 14:15:06.532287 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19a8a38-94ec-4377-8f90-24f34f5fb547" path="/var/lib/kubelet/pods/d19a8a38-94ec-4377-8f90-24f34f5fb547/volumes" Nov 21 14:15:29 crc kubenswrapper[4904]: I1121 14:15:29.019926 4904 scope.go:117] "RemoveContainer" containerID="acf9d917cece611e3714a2947789ed917d1cab98addad2c33f86b497f248379b" Nov 21 14:16:28 crc kubenswrapper[4904]: I1121 14:16:28.114095 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:16:28 crc kubenswrapper[4904]: I1121 14:16:28.114898 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:16:58 crc kubenswrapper[4904]: I1121 14:16:58.113517 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 21 14:16:58 crc kubenswrapper[4904]: I1121 14:16:58.114281 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:17:10 crc kubenswrapper[4904]: I1121 14:17:10.319281 4904 generic.go:334] "Generic (PLEG): container finished" podID="29e44d23-28ea-460f-9742-9292290aecda" containerID="116bd5f733f6c9f509b56a4eafd3fac5ee3226e35efce867c27f2ab5ae11ea68" exitCode=0 Nov 21 14:17:10 crc kubenswrapper[4904]: I1121 14:17:10.319366 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" event={"ID":"29e44d23-28ea-460f-9742-9292290aecda","Type":"ContainerDied","Data":"116bd5f733f6c9f509b56a4eafd3fac5ee3226e35efce867c27f2ab5ae11ea68"} Nov 21 14:17:11 crc kubenswrapper[4904]: I1121 14:17:11.931029 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.068742 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-ssh-key\") pod \"29e44d23-28ea-460f-9742-9292290aecda\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.068858 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-combined-ca-bundle\") pod \"29e44d23-28ea-460f-9742-9292290aecda\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.069058 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-inventory\") pod \"29e44d23-28ea-460f-9742-9292290aecda\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.070010 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkgrw\" (UniqueName: \"kubernetes.io/projected/29e44d23-28ea-460f-9742-9292290aecda-kube-api-access-dkgrw\") pod \"29e44d23-28ea-460f-9742-9292290aecda\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.070199 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-secret-0\") pod \"29e44d23-28ea-460f-9742-9292290aecda\" (UID: \"29e44d23-28ea-460f-9742-9292290aecda\") " Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.082883 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "29e44d23-28ea-460f-9742-9292290aecda" (UID: "29e44d23-28ea-460f-9742-9292290aecda"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.089667 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e44d23-28ea-460f-9742-9292290aecda-kube-api-access-dkgrw" (OuterVolumeSpecName: "kube-api-access-dkgrw") pod "29e44d23-28ea-460f-9742-9292290aecda" (UID: "29e44d23-28ea-460f-9742-9292290aecda"). InnerVolumeSpecName "kube-api-access-dkgrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.106161 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-inventory" (OuterVolumeSpecName: "inventory") pod "29e44d23-28ea-460f-9742-9292290aecda" (UID: "29e44d23-28ea-460f-9742-9292290aecda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.112166 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "29e44d23-28ea-460f-9742-9292290aecda" (UID: "29e44d23-28ea-460f-9742-9292290aecda"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.123554 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "29e44d23-28ea-460f-9742-9292290aecda" (UID: "29e44d23-28ea-460f-9742-9292290aecda"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.173982 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.174358 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.174532 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkgrw\" (UniqueName: \"kubernetes.io/projected/29e44d23-28ea-460f-9742-9292290aecda-kube-api-access-dkgrw\") on node \"crc\" DevicePath \"\"" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.174633 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.174786 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/29e44d23-28ea-460f-9742-9292290aecda-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.342803 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" event={"ID":"29e44d23-28ea-460f-9742-9292290aecda","Type":"ContainerDied","Data":"fe021ad644728d6af724d755652bd514ef3312f3ffe7642012f9894fe7eb88f8"} Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.342869 4904 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe021ad644728d6af724d755652bd514ef3312f3ffe7642012f9894fe7eb88f8" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.343320 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.464917 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np"] Nov 21 14:17:12 crc kubenswrapper[4904]: E1121 14:17:12.465581 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerName="extract-utilities" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.465605 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerName="extract-utilities" Nov 21 14:17:12 crc kubenswrapper[4904]: E1121 14:17:12.465644 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37cbcccf-c50a-4124-9689-6dd7ba9a80f5" containerName="collect-profiles" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.465675 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="37cbcccf-c50a-4124-9689-6dd7ba9a80f5" containerName="collect-profiles" Nov 21 14:17:12 crc kubenswrapper[4904]: E1121 14:17:12.465688 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29e44d23-28ea-460f-9742-9292290aecda" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.465697 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e44d23-28ea-460f-9742-9292290aecda" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:17:12 crc kubenswrapper[4904]: E1121 14:17:12.465732 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerName="registry-server" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.465738 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerName="registry-server" Nov 21 14:17:12 crc kubenswrapper[4904]: E1121 14:17:12.465750 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerName="extract-content" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.465758 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerName="extract-content" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.465994 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e44d23-28ea-460f-9742-9292290aecda" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.466011 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="37cbcccf-c50a-4124-9689-6dd7ba9a80f5" containerName="collect-profiles" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.466044 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4bdd113-72e3-46a4-9f22-804a6f6c037e" containerName="registry-server" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.467044 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.469932 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.470144 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.470472 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.470864 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.471046 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.497868 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np"] Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.586678 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.587094 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqdkf\" (UniqueName: \"kubernetes.io/projected/7fc13b06-871e-4313-a9fd-2d87ba4347da-kube-api-access-mqdkf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.587185 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.587466 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.587735 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 
14:17:12.587957 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.588088 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.690603 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.691230 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.691322 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.691400 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqdkf\" (UniqueName: \"kubernetes.io/projected/7fc13b06-871e-4313-a9fd-2d87ba4347da-kube-api-access-mqdkf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.691437 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.691590 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.691788 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.697677 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.698834 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.700339 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.705321 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.700278 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.716636 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqdkf\" (UniqueName: \"kubernetes.io/projected/7fc13b06-871e-4313-a9fd-2d87ba4347da-kube-api-access-mqdkf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.717012 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g52np\" (UID: 
\"7fc13b06-871e-4313-a9fd-2d87ba4347da\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:12 crc kubenswrapper[4904]: I1121 14:17:12.792154 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" Nov 21 14:17:13 crc kubenswrapper[4904]: I1121 14:17:13.503302 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np"] Nov 21 14:17:14 crc kubenswrapper[4904]: I1121 14:17:14.385256 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" event={"ID":"7fc13b06-871e-4313-a9fd-2d87ba4347da","Type":"ContainerStarted","Data":"533a35c2c018fc026dea9cefb47cfb3f6d66d643d7475766d85887193c3423ba"} Nov 21 14:17:15 crc kubenswrapper[4904]: I1121 14:17:15.414804 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" event={"ID":"7fc13b06-871e-4313-a9fd-2d87ba4347da","Type":"ContainerStarted","Data":"ae6bce2a2ebc262a61d1e6d7c46787a1b8eb268af1907c1ec71095ad2538c1cb"} Nov 21 14:17:15 crc kubenswrapper[4904]: I1121 14:17:15.929896 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" podStartSLOduration=2.870726674 podStartE2EDuration="3.929866832s" podCreationTimestamp="2025-11-21 14:17:12 +0000 UTC" firstStartedPulling="2025-11-21 14:17:13.510937651 +0000 UTC m=+2707.632470203" lastFinishedPulling="2025-11-21 14:17:14.570077809 +0000 UTC m=+2708.691610361" observedRunningTime="2025-11-21 14:17:15.920049482 +0000 UTC m=+2710.041582024" watchObservedRunningTime="2025-11-21 14:17:15.929866832 +0000 UTC m=+2710.051399394" Nov 21 14:17:28 crc kubenswrapper[4904]: I1121 14:17:28.113985 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:17:28 crc kubenswrapper[4904]: I1121 14:17:28.114887 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:17:28 crc kubenswrapper[4904]: I1121 14:17:28.114961 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 14:17:28 crc kubenswrapper[4904]: I1121 14:17:28.116300 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4ff6c2748eb1b91f361617b938e7a87dd68f8ac136364d295e380f97e4b020c"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:17:28 crc kubenswrapper[4904]: I1121 14:17:28.116371 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" 
containerID="cri-o://b4ff6c2748eb1b91f361617b938e7a87dd68f8ac136364d295e380f97e4b020c" gracePeriod=600 Nov 21 14:17:28 crc kubenswrapper[4904]: I1121 14:17:28.571622 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="b4ff6c2748eb1b91f361617b938e7a87dd68f8ac136364d295e380f97e4b020c" exitCode=0 Nov 21 14:17:28 crc kubenswrapper[4904]: I1121 14:17:28.571697 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"b4ff6c2748eb1b91f361617b938e7a87dd68f8ac136364d295e380f97e4b020c"} Nov 21 14:17:28 crc kubenswrapper[4904]: I1121 14:17:28.572421 4904 scope.go:117] "RemoveContainer" containerID="4a9fc3d7f5655c7d9530c2c34a43142ab4ab062cc8bde38440c31b945f15005b" Nov 21 14:17:29 crc kubenswrapper[4904]: I1121 14:17:29.585020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"} Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.567122 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gksdb"] Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.571156 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.585405 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gksdb"] Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.680377 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj4gg\" (UniqueName: \"kubernetes.io/projected/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-kube-api-access-dj4gg\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.680536 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-catalog-content\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.680619 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-utilities\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.782320 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-catalog-content\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.782405 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-utilities\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.782535 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj4gg\" (UniqueName: \"kubernetes.io/projected/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-kube-api-access-dj4gg\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.783305 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-utilities\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.783299 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-catalog-content\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.806145 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj4gg\" (UniqueName: \"kubernetes.io/projected/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-kube-api-access-dj4gg\") pod \"community-operators-gksdb\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") " pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:51 crc kubenswrapper[4904]: I1121 14:17:51.919545 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gksdb" Nov 21 14:17:52 crc kubenswrapper[4904]: I1121 14:17:52.550343 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gksdb"] Nov 21 14:17:52 crc kubenswrapper[4904]: I1121 14:17:52.868519 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gksdb" event={"ID":"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3","Type":"ContainerStarted","Data":"c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5"} Nov 21 14:17:52 crc kubenswrapper[4904]: I1121 14:17:52.869040 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gksdb" event={"ID":"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3","Type":"ContainerStarted","Data":"6abc4c36f3b2a0b22aab028e6704c6f154a8b6c2f6ff7d9212535e89036e5d60"} Nov 21 14:17:53 crc kubenswrapper[4904]: I1121 14:17:53.888537 4904 generic.go:334] "Generic (PLEG): container finished" podID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerID="c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5" exitCode=0 Nov 21 14:17:53 crc kubenswrapper[4904]: I1121 14:17:53.888617 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gksdb" event={"ID":"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3","Type":"ContainerDied","Data":"c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5"} Nov 21 14:17:53 crc kubenswrapper[4904]: I1121 14:17:53.958738 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-92jxx"] Nov 21 14:17:53 crc kubenswrapper[4904]: I1121 14:17:53.961948 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92jxx" Nov 21 14:17:53 crc kubenswrapper[4904]: I1121 14:17:53.974066 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92jxx"] Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.150977 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-catalog-content\") pod \"certified-operators-92jxx\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx" Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.152355 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-utilities\") pod \"certified-operators-92jxx\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx" Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.152617 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn4xk\" (UniqueName: \"kubernetes.io/projected/e9a59072-07ea-4ac8-b7ca-216819ef6530-kube-api-access-qn4xk\") pod \"certified-operators-92jxx\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx" Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.254569 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-catalog-content\") pod \"certified-operators-92jxx\" 
(UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.254694 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-utilities\") pod \"certified-operators-92jxx\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.254806 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn4xk\" (UniqueName: \"kubernetes.io/projected/e9a59072-07ea-4ac8-b7ca-216819ef6530-kube-api-access-qn4xk\") pod \"certified-operators-92jxx\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.255331 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-catalog-content\") pod \"certified-operators-92jxx\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.255375 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-utilities\") pod \"certified-operators-92jxx\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.279887 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn4xk\" (UniqueName: \"kubernetes.io/projected/e9a59072-07ea-4ac8-b7ca-216819ef6530-kube-api-access-qn4xk\") pod \"certified-operators-92jxx\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") " pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.286845 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.843734 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92jxx"]
Nov 21 14:17:54 crc kubenswrapper[4904]: I1121 14:17:54.955822 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92jxx" event={"ID":"e9a59072-07ea-4ac8-b7ca-216819ef6530","Type":"ContainerStarted","Data":"6f00d579af4348ef77c5ba8ca6fc26f240325e8f3877c6a6a99743f2ddd57f98"}
Nov 21 14:17:55 crc kubenswrapper[4904]: I1121 14:17:55.969578 4904 generic.go:334] "Generic (PLEG): container finished" podID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerID="1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c" exitCode=0
Nov 21 14:17:55 crc kubenswrapper[4904]: I1121 14:17:55.969675 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92jxx" event={"ID":"e9a59072-07ea-4ac8-b7ca-216819ef6530","Type":"ContainerDied","Data":"1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c"}
Nov 21 14:17:55 crc kubenswrapper[4904]: I1121 14:17:55.972780 4904 generic.go:334] "Generic (PLEG): container finished" podID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerID="adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5" exitCode=0
Nov 21 14:17:55 crc kubenswrapper[4904]: I1121 14:17:55.972811 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gksdb" event={"ID":"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3","Type":"ContainerDied","Data":"adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5"}
Nov 21 14:17:56 crc kubenswrapper[4904]: I1121 14:17:56.988149 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gksdb" event={"ID":"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3","Type":"ContainerStarted","Data":"8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18"}
Nov 21 14:17:57 crc kubenswrapper[4904]: I1121 14:17:57.012664 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gksdb" podStartSLOduration=3.262174104 podStartE2EDuration="6.012630815s" podCreationTimestamp="2025-11-21 14:17:51 +0000 UTC" firstStartedPulling="2025-11-21 14:17:53.892157184 +0000 UTC m=+2748.013689736" lastFinishedPulling="2025-11-21 14:17:56.642613885 +0000 UTC m=+2750.764146447" observedRunningTime="2025-11-21 14:17:57.007398077 +0000 UTC m=+2751.128930629" watchObservedRunningTime="2025-11-21 14:17:57.012630815 +0000 UTC m=+2751.134163367"
Nov 21 14:17:58 crc kubenswrapper[4904]: I1121 14:17:58.012545 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92jxx" event={"ID":"e9a59072-07ea-4ac8-b7ca-216819ef6530","Type":"ContainerStarted","Data":"e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b"}
Nov 21 14:18:01 crc kubenswrapper[4904]: I1121 14:18:01.920069 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gksdb"
Nov 21 14:18:01 crc kubenswrapper[4904]: I1121 14:18:01.921034 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gksdb"
Nov 21 14:18:01 crc kubenswrapper[4904]: I1121 14:18:01.976132 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gksdb"
Nov 21 14:18:02 crc kubenswrapper[4904]: I1121 14:18:02.059673 4904 generic.go:334] "Generic (PLEG): container finished" podID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerID="e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b" exitCode=0
Nov 21 14:18:02 crc kubenswrapper[4904]: I1121 14:18:02.059697 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92jxx" event={"ID":"e9a59072-07ea-4ac8-b7ca-216819ef6530","Type":"ContainerDied","Data":"e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b"}
Nov 21 14:18:02 crc kubenswrapper[4904]: I1121 14:18:02.133076 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gksdb"
Nov 21 14:18:03 crc kubenswrapper[4904]: I1121 14:18:03.076185 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92jxx" event={"ID":"e9a59072-07ea-4ac8-b7ca-216819ef6530","Type":"ContainerStarted","Data":"255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862"}
Nov 21 14:18:03 crc kubenswrapper[4904]: I1121 14:18:03.100615 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-92jxx" podStartSLOduration=3.512609478 podStartE2EDuration="10.100591477s" podCreationTimestamp="2025-11-21 14:17:53 +0000 UTC" firstStartedPulling="2025-11-21 14:17:55.972519452 +0000 UTC m=+2750.094052004" lastFinishedPulling="2025-11-21 14:18:02.560501451 +0000 UTC m=+2756.682034003" observedRunningTime="2025-11-21 14:18:03.097306987 +0000 UTC m=+2757.218839539" watchObservedRunningTime="2025-11-21 14:18:03.100591477 +0000 UTC m=+2757.222124029"
Nov 21 14:18:03 crc kubenswrapper[4904]: I1121 14:18:03.750599 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gksdb"]
Nov 21 14:18:04 crc kubenswrapper[4904]: I1121 14:18:04.287574 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:18:04 crc kubenswrapper[4904]: I1121 14:18:04.287674 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.096313 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gksdb" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerName="registry-server" containerID="cri-o://8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18" gracePeriod=2
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.351109 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-92jxx" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="registry-server" probeResult="failure" output=<
Nov 21 14:18:05 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s
Nov 21 14:18:05 crc kubenswrapper[4904]: >
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.666344 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gksdb"
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.757727 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-catalog-content\") pod \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") "
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.757793 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj4gg\" (UniqueName: \"kubernetes.io/projected/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-kube-api-access-dj4gg\") pod \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") "
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.757821 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-utilities\") pod \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\" (UID: \"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3\") "
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.759004 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-utilities" (OuterVolumeSpecName: "utilities") pod "7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" (UID: "7bbcc8dc-de13-48cd-8440-9df9da0f7cd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.765797 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-kube-api-access-dj4gg" (OuterVolumeSpecName: "kube-api-access-dj4gg") pod "7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" (UID: "7bbcc8dc-de13-48cd-8440-9df9da0f7cd3"). InnerVolumeSpecName "kube-api-access-dj4gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.809618 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" (UID: "7bbcc8dc-de13-48cd-8440-9df9da0f7cd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.860990 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.861035 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj4gg\" (UniqueName: \"kubernetes.io/projected/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-kube-api-access-dj4gg\") on node \"crc\" DevicePath \"\""
Nov 21 14:18:05 crc kubenswrapper[4904]: I1121 14:18:05.861052 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.112435 4904 generic.go:334] "Generic (PLEG): container finished" podID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerID="8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18" exitCode=0
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.112498 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gksdb" event={"ID":"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3","Type":"ContainerDied","Data":"8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18"}
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.112514 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gksdb"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.112545 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gksdb" event={"ID":"7bbcc8dc-de13-48cd-8440-9df9da0f7cd3","Type":"ContainerDied","Data":"6abc4c36f3b2a0b22aab028e6704c6f154a8b6c2f6ff7d9212535e89036e5d60"}
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.112567 4904 scope.go:117] "RemoveContainer" containerID="8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.156567 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gksdb"]
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.159485 4904 scope.go:117] "RemoveContainer" containerID="adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.167070 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gksdb"]
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.191559 4904 scope.go:117] "RemoveContainer" containerID="c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.245981 4904 scope.go:117] "RemoveContainer" containerID="8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18"
Nov 21 14:18:06 crc kubenswrapper[4904]: E1121 14:18:06.246764 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18\": container with ID starting with 8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18 not found: ID does not exist" containerID="8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.246821 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18"} err="failed to get container status \"8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18\": rpc error: code = NotFound desc = could not find container \"8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18\": container with ID starting with 8c2821668257f63bf4e14653e61a53acf02ddb2c75f943bd5866b7cecfcbad18 not found: ID does not exist"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.246856 4904 scope.go:117] "RemoveContainer" containerID="adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5"
Nov 21 14:18:06 crc kubenswrapper[4904]: E1121 14:18:06.247611 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5\": container with ID starting with adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5 not found: ID does not exist" containerID="adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.247817 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5"} err="failed to get container status \"adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5\": rpc error: code = NotFound desc = could not find container \"adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5\": container with ID starting with adef553ac84639179010831777d18c5f3c3d4b7730db154cdf7ea9a5dfd61bc5 not found: ID does not exist"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.247872 4904 scope.go:117] "RemoveContainer" containerID="c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5"
Nov 21 14:18:06 crc kubenswrapper[4904]: E1121 14:18:06.248481 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5\": container with ID starting with c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5 not found: ID does not exist" containerID="c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.248552 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5"} err="failed to get container status \"c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5\": rpc error: code = NotFound desc = could not find container \"c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5\": container with ID starting with c0d8f59f121d12e3a6201f431ddf9b4fd7c11f55066d442a528e7f71cdea0ed5 not found: ID does not exist"
Nov 21 14:18:06 crc kubenswrapper[4904]: I1121 14:18:06.533366 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" path="/var/lib/kubelet/pods/7bbcc8dc-de13-48cd-8440-9df9da0f7cd3/volumes"
Nov 21 14:18:14 crc kubenswrapper[4904]: I1121 14:18:14.351953 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:18:14 crc kubenswrapper[4904]: I1121 14:18:14.417862 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:18:14 crc kubenswrapper[4904]: I1121 14:18:14.606684 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92jxx"]
Nov 21 14:18:16 crc kubenswrapper[4904]: I1121 14:18:16.291230 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-92jxx" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="registry-server" containerID="cri-o://255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862" gracePeriod=2
Nov 21 14:18:16 crc kubenswrapper[4904]: I1121 14:18:16.813033 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:18:16 crc kubenswrapper[4904]: I1121 14:18:16.948910 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-utilities\") pod \"e9a59072-07ea-4ac8-b7ca-216819ef6530\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") "
Nov 21 14:18:16 crc kubenswrapper[4904]: I1121 14:18:16.948961 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-catalog-content\") pod \"e9a59072-07ea-4ac8-b7ca-216819ef6530\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") "
Nov 21 14:18:16 crc kubenswrapper[4904]: I1121 14:18:16.949218 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn4xk\" (UniqueName: \"kubernetes.io/projected/e9a59072-07ea-4ac8-b7ca-216819ef6530-kube-api-access-qn4xk\") pod \"e9a59072-07ea-4ac8-b7ca-216819ef6530\" (UID: \"e9a59072-07ea-4ac8-b7ca-216819ef6530\") "
Nov 21 14:18:16 crc kubenswrapper[4904]: I1121 14:18:16.949887 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-utilities" (OuterVolumeSpecName: "utilities") pod "e9a59072-07ea-4ac8-b7ca-216819ef6530" (UID: "e9a59072-07ea-4ac8-b7ca-216819ef6530"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:18:16 crc kubenswrapper[4904]: I1121 14:18:16.950863 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 14:18:16 crc kubenswrapper[4904]: I1121 14:18:16.978739 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9a59072-07ea-4ac8-b7ca-216819ef6530-kube-api-access-qn4xk" (OuterVolumeSpecName: "kube-api-access-qn4xk") pod "e9a59072-07ea-4ac8-b7ca-216819ef6530" (UID: "e9a59072-07ea-4ac8-b7ca-216819ef6530"). InnerVolumeSpecName "kube-api-access-qn4xk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.014773 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9a59072-07ea-4ac8-b7ca-216819ef6530" (UID: "e9a59072-07ea-4ac8-b7ca-216819ef6530"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.053315 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a59072-07ea-4ac8-b7ca-216819ef6530-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.053361 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn4xk\" (UniqueName: \"kubernetes.io/projected/e9a59072-07ea-4ac8-b7ca-216819ef6530-kube-api-access-qn4xk\") on node \"crc\" DevicePath \"\""
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.307817 4904 generic.go:334] "Generic (PLEG): container finished" podID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerID="255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862" exitCode=0
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.307905 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92jxx" event={"ID":"e9a59072-07ea-4ac8-b7ca-216819ef6530","Type":"ContainerDied","Data":"255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862"}
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.307941 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92jxx"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.308001 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92jxx" event={"ID":"e9a59072-07ea-4ac8-b7ca-216819ef6530","Type":"ContainerDied","Data":"6f00d579af4348ef77c5ba8ca6fc26f240325e8f3877c6a6a99743f2ddd57f98"}
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.308040 4904 scope.go:117] "RemoveContainer" containerID="255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.332059 4904 scope.go:117] "RemoveContainer" containerID="e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.353146 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92jxx"]
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.378507 4904 scope.go:117] "RemoveContainer" containerID="1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.380962 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-92jxx"]
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.446675 4904 scope.go:117] "RemoveContainer" containerID="255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862"
Nov 21 14:18:17 crc kubenswrapper[4904]: E1121 14:18:17.447235 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862\": container with ID starting with 255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862 not found: ID does not exist" containerID="255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.447308 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862"} err="failed to get container status \"255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862\": rpc error: code = NotFound desc = could not find container \"255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862\": container with ID starting with 255c7369e5fa0909aaa54ab53b33b0ca474de624d050f2a79fdf1ec58f30b862 not found: ID does not exist"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.447339 4904 scope.go:117] "RemoveContainer" containerID="e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b"
Nov 21 14:18:17 crc kubenswrapper[4904]: E1121 14:18:17.447857 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b\": container with ID starting with e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b not found: ID does not exist" containerID="e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.447886 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b"} err="failed to get container status \"e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b\": rpc error: code = NotFound desc = could not find container \"e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b\": container with ID starting with e651155a2f3c8374a708c21b011a721d5ef90cc678587e04560dfb7c10e0982b not found: ID does not exist"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.447899 4904 scope.go:117] "RemoveContainer" containerID="1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c"
Nov 21 14:18:17 crc kubenswrapper[4904]: E1121 14:18:17.448227 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c\": container with ID starting with 1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c not found: ID does not exist" containerID="1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c"
Nov 21 14:18:17 crc kubenswrapper[4904]: I1121 14:18:17.448253 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c"} err="failed to get container status \"1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c\": rpc error: code = NotFound desc = could not find container \"1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c\": container with ID starting with 1468970cf1e6c583d85d3ebd0aa7a1b5e47e04e91038e0999a89495e0df9d78c not found: ID does not exist"
Nov 21 14:18:18 crc kubenswrapper[4904]: I1121 14:18:18.525996 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" path="/var/lib/kubelet/pods/e9a59072-07ea-4ac8-b7ca-216819ef6530/volumes"
Nov 21 14:19:28 crc kubenswrapper[4904]: I1121 14:19:28.113612 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:19:28 crc kubenswrapper[4904]: I1121 14:19:28.114585 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:19:58 crc kubenswrapper[4904]: I1121 14:19:58.114298 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:19:58 crc kubenswrapper[4904]: I1121 14:19:58.115081 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.113777 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.114398 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.114463 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.115640 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.115723 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" gracePeriod=600
Nov 21 14:20:28 crc kubenswrapper[4904]: E1121 14:20:28.252308 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.972478 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" exitCode=0
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.972541 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"}
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.973025 4904 scope.go:117] "RemoveContainer" containerID="b4ff6c2748eb1b91f361617b938e7a87dd68f8ac136364d295e380f97e4b020c"
Nov 21 14:20:28 crc kubenswrapper[4904]: I1121 14:20:28.974011 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:20:28 crc kubenswrapper[4904]: E1121 14:20:28.974597 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:20:42 crc kubenswrapper[4904]: I1121 14:20:42.109922 4904 generic.go:334] "Generic (PLEG): container finished" podID="7fc13b06-871e-4313-a9fd-2d87ba4347da" containerID="ae6bce2a2ebc262a61d1e6d7c46787a1b8eb268af1907c1ec71095ad2538c1cb" exitCode=0
Nov 21 14:20:42 crc kubenswrapper[4904]: I1121 14:20:42.111045 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" event={"ID":"7fc13b06-871e-4313-a9fd-2d87ba4347da","Type":"ContainerDied","Data":"ae6bce2a2ebc262a61d1e6d7c46787a1b8eb268af1907c1ec71095ad2538c1cb"}
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.513159 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:20:43 crc kubenswrapper[4904]: E1121 14:20:43.513860 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.742076 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np"
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.903982 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-inventory\") pod \"7fc13b06-871e-4313-a9fd-2d87ba4347da\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") "
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.904905 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-2\") pod \"7fc13b06-871e-4313-a9fd-2d87ba4347da\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") "
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.904968 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ssh-key\") pod \"7fc13b06-871e-4313-a9fd-2d87ba4347da\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") "
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.905028 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-telemetry-combined-ca-bundle\") pod \"7fc13b06-871e-4313-a9fd-2d87ba4347da\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") "
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.905077 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-0\") pod \"7fc13b06-871e-4313-a9fd-2d87ba4347da\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") "
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.905164 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-1\") pod \"7fc13b06-871e-4313-a9fd-2d87ba4347da\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") "
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.905221 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqdkf\" (UniqueName: \"kubernetes.io/projected/7fc13b06-871e-4313-a9fd-2d87ba4347da-kube-api-access-mqdkf\") pod \"7fc13b06-871e-4313-a9fd-2d87ba4347da\" (UID: \"7fc13b06-871e-4313-a9fd-2d87ba4347da\") "
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.913496 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc13b06-871e-4313-a9fd-2d87ba4347da-kube-api-access-mqdkf" (OuterVolumeSpecName: "kube-api-access-mqdkf") pod "7fc13b06-871e-4313-a9fd-2d87ba4347da" (UID: "7fc13b06-871e-4313-a9fd-2d87ba4347da"). InnerVolumeSpecName "kube-api-access-mqdkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.919106 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7fc13b06-871e-4313-a9fd-2d87ba4347da" (UID: "7fc13b06-871e-4313-a9fd-2d87ba4347da"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.943896 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7fc13b06-871e-4313-a9fd-2d87ba4347da" (UID: "7fc13b06-871e-4313-a9fd-2d87ba4347da"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.945482 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7fc13b06-871e-4313-a9fd-2d87ba4347da" (UID: "7fc13b06-871e-4313-a9fd-2d87ba4347da"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.955856 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7fc13b06-871e-4313-a9fd-2d87ba4347da" (UID: "7fc13b06-871e-4313-a9fd-2d87ba4347da"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:20:43 crc kubenswrapper[4904]: I1121 14:20:43.967764 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-inventory" (OuterVolumeSpecName: "inventory") pod "7fc13b06-871e-4313-a9fd-2d87ba4347da" (UID: "7fc13b06-871e-4313-a9fd-2d87ba4347da"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.007511 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.007560 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.007573 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.007582 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.007592 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqdkf\" (UniqueName: \"kubernetes.io/projected/7fc13b06-871e-4313-a9fd-2d87ba4347da-kube-api-access-mqdkf\") on node \"crc\" DevicePath \"\""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.007603 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.141385 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np" event={"ID":"7fc13b06-871e-4313-a9fd-2d87ba4347da","Type":"ContainerDied","Data":"533a35c2c018fc026dea9cefb47cfb3f6d66d643d7475766d85887193c3423ba"}
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.141438 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="533a35c2c018fc026dea9cefb47cfb3f6d66d643d7475766d85887193c3423ba"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.141524 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.199136 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7fc13b06-871e-4313-a9fd-2d87ba4347da" (UID: "7fc13b06-871e-4313-a9fd-2d87ba4347da"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.213214 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7fc13b06-871e-4313-a9fd-2d87ba4347da-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.258883 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"]
Nov 21 14:20:44 crc kubenswrapper[4904]: E1121 14:20:44.259509 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerName="extract-content"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259528 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerName="extract-content"
Nov 21 14:20:44 crc kubenswrapper[4904]: E1121 14:20:44.259546 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="extract-content"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259553 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="extract-content"
Nov 21 14:20:44 crc kubenswrapper[4904]: E1121 14:20:44.259574 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerName="registry-server"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259581 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerName="registry-server"
Nov 21 14:20:44 crc kubenswrapper[4904]: E1121 14:20:44.259592 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="registry-server"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259598 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="registry-server"
Nov 21 14:20:44 crc kubenswrapper[4904]: E1121 14:20:44.259616 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerName="extract-utilities"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259622 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerName="extract-utilities"
Nov 21 14:20:44 crc kubenswrapper[4904]: E1121 14:20:44.259640 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc13b06-871e-4313-a9fd-2d87ba4347da" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259664 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc13b06-871e-4313-a9fd-2d87ba4347da" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:20:44 crc kubenswrapper[4904]: E1121 14:20:44.259682 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="extract-utilities"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259688 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="extract-utilities"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259908 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9a59072-07ea-4ac8-b7ca-216819ef6530" containerName="registry-server"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259931 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbcc8dc-de13-48cd-8440-9df9da0f7cd3" containerName="registry-server"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.259943 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc13b06-871e-4313-a9fd-2d87ba4347da" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.261099 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.263887 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.287006 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"]
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.418842 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.418942 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.418992 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.419021 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.419100 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.419182 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.419287 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65cd2\" (UniqueName: \"kubernetes.io/projected/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-kube-api-access-65cd2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.520819 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.520940 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.521037 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65cd2\" (UniqueName: \"kubernetes.io/projected/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-kube-api-access-65cd2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.521073 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.521125 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.521165 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.521193 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.528493 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.529326 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.530568 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.531348 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.531970 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.532522 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.548527 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65cd2\" (UniqueName: \"kubernetes.io/projected/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-kube-api-access-65cd2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:44 crc kubenswrapper[4904]: I1121 14:20:44.591017 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"
Nov 21 14:20:46 crc kubenswrapper[4904]: I1121 14:20:46.073183 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"]
Nov 21 14:20:46 crc kubenswrapper[4904]: I1121 14:20:46.119712 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 14:20:46 crc kubenswrapper[4904]: I1121 14:20:46.167026 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b" event={"ID":"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b","Type":"ContainerStarted","Data":"8dfe1c761872f9ed08e33648154e64962c3f5586a712bbf1c9fb258c0258b03a"}
Nov 21 14:20:47 crc kubenswrapper[4904]: I1121 14:20:47.184593 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b" event={"ID":"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b","Type":"ContainerStarted","Data":"87feb1c6987477a5e8445b56f03a655fd068c0981b3371ae2e48934cba478438"}
Nov 21 14:20:47 crc kubenswrapper[4904]: I1121 14:20:47.214713 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b" podStartSLOduration=2.736549686 podStartE2EDuration="3.214686459s" podCreationTimestamp="2025-11-21 14:20:44 +0000 UTC" firstStartedPulling="2025-11-21 14:20:46.119297155 +0000 UTC m=+2920.240829707" lastFinishedPulling="2025-11-21 14:20:46.597433928 +0000 UTC m=+2920.718966480" observedRunningTime="2025-11-21 14:20:47.207346709 +0000 UTC m=+2921.328879261" watchObservedRunningTime="2025-11-21 14:20:47.214686459 +0000 UTC m=+2921.336219011"
Nov 21 14:20:57 crc kubenswrapper[4904]: I1121 14:20:57.513347 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:20:57 crc kubenswrapper[4904]: E1121 14:20:57.514276 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:21:08 crc kubenswrapper[4904]: I1121 14:21:08.514596 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:21:08 crc kubenswrapper[4904]: E1121 14:21:08.516183 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:21:23 crc kubenswrapper[4904]: I1121 14:21:23.513970 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:21:23 crc kubenswrapper[4904]: E1121 14:21:23.515073 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:21:34 crc kubenswrapper[4904]: I1121 14:21:34.513713 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:21:34 crc kubenswrapper[4904]: E1121 14:21:34.514538 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:21:48 crc kubenswrapper[4904]: I1121 14:21:48.515785 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:21:48 crc kubenswrapper[4904]: E1121 14:21:48.516846 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:22:01 crc kubenswrapper[4904]: I1121 14:22:01.513525 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:22:01 crc kubenswrapper[4904]: E1121 14:22:01.515738 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:22:16 crc kubenswrapper[4904]: I1121 14:22:16.521018 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:22:16 crc kubenswrapper[4904]: E1121 14:22:16.522271 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:22:31 crc kubenswrapper[4904]: I1121 14:22:31.515177 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:22:31 crc kubenswrapper[4904]: E1121 14:22:31.516601 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:22:45 crc kubenswrapper[4904]: I1121 14:22:45.514121 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:22:45 crc kubenswrapper[4904]: E1121 14:22:45.515672 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:22:56 crc kubenswrapper[4904]: I1121 14:22:56.523406 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:22:56 crc kubenswrapper[4904]: E1121 14:22:56.524860 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:23:10 crc kubenswrapper[4904]: I1121 14:23:10.513600 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:23:10 crc kubenswrapper[4904]: E1121 14:23:10.514723 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:23:24 crc kubenswrapper[4904]: I1121 14:23:24.515962 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:23:24 crc kubenswrapper[4904]: E1121 14:23:24.517524 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:23:38 crc kubenswrapper[4904]: I1121 14:23:38.515986 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:23:38 crc kubenswrapper[4904]: E1121 14:23:38.517430 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:23:50 crc kubenswrapper[4904]: I1121 14:23:50.514507 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d"
Nov 21 14:23:50 crc kubenswrapper[4904]: E1121 14:23:50.515735 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:23:54 crc kubenswrapper[4904]: I1121 14:23:54.523325 4904 generic.go:334] "Generic (PLEG): container finished" podID="7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" containerID="87feb1c6987477a5e8445b56f03a655fd068c0981b3371ae2e48934cba478438" exitCode=0
Nov 21 14:23:54 crc kubenswrapper[4904]: I1121 14:23:54.524166 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b" event={"ID":"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b","Type":"ContainerDied","Data":"87feb1c6987477a5e8445b56f03a655fd068c0981b3371ae2e48934cba478438"}
Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.132671 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.194622 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-0\") pod \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.194863 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-inventory\") pod \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.194989 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-telemetry-power-monitoring-combined-ca-bundle\") pod \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.195041 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-2\") pod \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.195169 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ssh-key\") pod \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.195244 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65cd2\" (UniqueName: \"kubernetes.io/projected/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-kube-api-access-65cd2\") pod \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.195283 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-1\") pod \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\" (UID: \"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b\") " Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.204380 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-kube-api-access-65cd2" (OuterVolumeSpecName: "kube-api-access-65cd2") pod "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" (UID: "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b"). InnerVolumeSpecName "kube-api-access-65cd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.204617 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" (UID: "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.237332 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" (UID: "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.242458 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-inventory" (OuterVolumeSpecName: "inventory") pod "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" (UID: "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.242448 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" (UID: "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.260596 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" (UID: "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.264070 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" (UID: "7b6e5c0f-f6b3-4843-9075-a069ffcadb7b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.298630 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65cd2\" (UniqueName: \"kubernetes.io/projected/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-kube-api-access-65cd2\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.298694 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.298710 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.298721 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.298734 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.298746 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.298776 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.558799 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b" event={"ID":"7b6e5c0f-f6b3-4843-9075-a069ffcadb7b","Type":"ContainerDied","Data":"8dfe1c761872f9ed08e33648154e64962c3f5586a712bbf1c9fb258c0258b03a"} Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.558859 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dfe1c761872f9ed08e33648154e64962c3f5586a712bbf1c9fb258c0258b03a" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.558863 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.663645 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6"] Nov 21 14:23:56 crc kubenswrapper[4904]: E1121 14:23:56.664898 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.664927 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.665261 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.666300 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.670352 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.670614 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.670667 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.670747 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.670752 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.679327 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6"] Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.810438 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.810999 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96cj6\" (UniqueName: \"kubernetes.io/projected/28761749-49f4-46db-8542-f82d6fb532c7-kube-api-access-96cj6\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.811043 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-inventory\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.811228 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.811272 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.913383 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.913473 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96cj6\" (UniqueName: \"kubernetes.io/projected/28761749-49f4-46db-8542-f82d6fb532c7-kube-api-access-96cj6\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.913508 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.913634 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.913690 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.920169 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-1\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.920930 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.920984 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.925262 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.939758 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96cj6\" (UniqueName: \"kubernetes.io/projected/28761749-49f4-46db-8542-f82d6fb532c7-kube-api-access-96cj6\") pod \"logging-edpm-deployment-openstack-edpm-ipam-b42j6\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:56 crc kubenswrapper[4904]: I1121 14:23:56.985465 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:23:57 crc kubenswrapper[4904]: I1121 14:23:57.620353 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6"] Nov 21 14:23:58 crc kubenswrapper[4904]: I1121 14:23:58.591459 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" event={"ID":"28761749-49f4-46db-8542-f82d6fb532c7","Type":"ContainerStarted","Data":"ff416c708d01e7cda3365d9215f9088aeebb24f7eb09b0dd9da719457094dfb2"} Nov 21 14:23:59 crc kubenswrapper[4904]: I1121 14:23:59.619060 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" event={"ID":"28761749-49f4-46db-8542-f82d6fb532c7","Type":"ContainerStarted","Data":"8bfa9d930d7572ee36a68be41f3595d1cfa675d1d959a77b900638e9e41d0dfd"} Nov 21 14:23:59 crc kubenswrapper[4904]: I1121 14:23:59.641199 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" podStartSLOduration=2.848666523 podStartE2EDuration="3.641164265s" podCreationTimestamp="2025-11-21 14:23:56 +0000 UTC" firstStartedPulling="2025-11-21 14:23:57.627994689 +0000 UTC m=+3111.749527241" lastFinishedPulling="2025-11-21 14:23:58.420492411 +0000 UTC m=+3112.542024983" observedRunningTime="2025-11-21 14:23:59.63849513 +0000 UTC m=+3113.760027692" watchObservedRunningTime="2025-11-21 14:23:59.641164265 +0000 UTC m=+3113.762696847" Nov 21 14:24:04 crc kubenswrapper[4904]: I1121 14:24:04.514688 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:24:04 crc kubenswrapper[4904]: E1121 14:24:04.515910 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:24:19 crc kubenswrapper[4904]: I1121 14:24:19.515387 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:24:19 crc kubenswrapper[4904]: E1121 14:24:19.516568 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:24:20 crc kubenswrapper[4904]: I1121 14:24:20.865931 4904 generic.go:334] "Generic (PLEG): container finished" podID="28761749-49f4-46db-8542-f82d6fb532c7" containerID="8bfa9d930d7572ee36a68be41f3595d1cfa675d1d959a77b900638e9e41d0dfd" exitCode=0 Nov 21 14:24:20 crc kubenswrapper[4904]: I1121 14:24:20.866013 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" event={"ID":"28761749-49f4-46db-8542-f82d6fb532c7","Type":"ContainerDied","Data":"8bfa9d930d7572ee36a68be41f3595d1cfa675d1d959a77b900638e9e41d0dfd"} Nov 21 14:24:22 crc 
kubenswrapper[4904]: I1121 14:24:22.386017 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.489202 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-1\") pod \"28761749-49f4-46db-8542-f82d6fb532c7\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.489362 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-ssh-key\") pod \"28761749-49f4-46db-8542-f82d6fb532c7\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.489604 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-inventory\") pod \"28761749-49f4-46db-8542-f82d6fb532c7\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.489629 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96cj6\" (UniqueName: \"kubernetes.io/projected/28761749-49f4-46db-8542-f82d6fb532c7-kube-api-access-96cj6\") pod \"28761749-49f4-46db-8542-f82d6fb532c7\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.489712 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-0\") pod \"28761749-49f4-46db-8542-f82d6fb532c7\" (UID: \"28761749-49f4-46db-8542-f82d6fb532c7\") " Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.507763 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28761749-49f4-46db-8542-f82d6fb532c7-kube-api-access-96cj6" (OuterVolumeSpecName: "kube-api-access-96cj6") pod "28761749-49f4-46db-8542-f82d6fb532c7" (UID: "28761749-49f4-46db-8542-f82d6fb532c7"). InnerVolumeSpecName "kube-api-access-96cj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.526581 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "28761749-49f4-46db-8542-f82d6fb532c7" (UID: "28761749-49f4-46db-8542-f82d6fb532c7"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.526625 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-inventory" (OuterVolumeSpecName: "inventory") pod "28761749-49f4-46db-8542-f82d6fb532c7" (UID: "28761749-49f4-46db-8542-f82d6fb532c7"). InnerVolumeSpecName "inventory". 
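
Note: the "Observed pod startup duration" entries in this excerpt are internally consistent with podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, and podStartSLOduration = podStartE2EDuration minus the image pull window (lastFinishedPulling - firstStartedPulling), with the pull window taken from the monotonic m=+ offsets. A short Go check against the logging-edpm-deployment-openstack-edpm-ipam-b42j6 entry at 14:23:59 above:

// slo_math.go: reproduce the startup-latency figures logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created := time.Date(2025, 11, 21, 14, 23, 56, 0, time.UTC)
	observed := time.Date(2025, 11, 21, 14, 23, 59, 641164265, time.UTC)

	// Pull window from the monotonic offsets m=+3111.749527241 and
	// m=+3112.542024983, expressed in nanoseconds.
	pull := time.Duration(3112542024983 - 3111749527241)

	e2e := observed.Sub(created)
	fmt.Println(e2e, e2e-pull) // 3.641164265s 2.848666523s
}

The same identity reproduces the telemetry pod's earlier entry (podStartE2EDuration 3.214686459s, podStartSLOduration 2.736549686s).
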
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.534855 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "28761749-49f4-46db-8542-f82d6fb532c7" (UID: "28761749-49f4-46db-8542-f82d6fb532c7"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.536570 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "28761749-49f4-46db-8542-f82d6fb532c7" (UID: "28761749-49f4-46db-8542-f82d6fb532c7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.592776 4904 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.592822 4904 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.592840 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.592852 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28761749-49f4-46db-8542-f82d6fb532c7-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.592862 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96cj6\" (UniqueName: \"kubernetes.io/projected/28761749-49f4-46db-8542-f82d6fb532c7-kube-api-access-96cj6\") on node \"crc\" DevicePath \"\"" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.892812 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" event={"ID":"28761749-49f4-46db-8542-f82d6fb532c7","Type":"ContainerDied","Data":"ff416c708d01e7cda3365d9215f9088aeebb24f7eb09b0dd9da719457094dfb2"} Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.893174 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff416c708d01e7cda3365d9215f9088aeebb24f7eb09b0dd9da719457094dfb2" Nov 21 14:24:22 crc kubenswrapper[4904]: I1121 14:24:22.892879 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6" Nov 21 14:24:31 crc kubenswrapper[4904]: I1121 14:24:31.513354 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:24:31 crc kubenswrapper[4904]: E1121 14:24:31.514861 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:24:42 crc kubenswrapper[4904]: I1121 14:24:42.513578 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:24:42 crc kubenswrapper[4904]: E1121 14:24:42.516635 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:24:54 crc kubenswrapper[4904]: I1121 14:24:54.513075 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:24:54 crc kubenswrapper[4904]: E1121 14:24:54.513997 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.051344 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m8qw9"] Nov 21 14:24:56 crc kubenswrapper[4904]: E1121 14:24:56.052394 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28761749-49f4-46db-8542-f82d6fb532c7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.052413 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="28761749-49f4-46db-8542-f82d6fb532c7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.052662 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="28761749-49f4-46db-8542-f82d6fb532c7" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.054498 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.063403 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8qw9"] Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.174228 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-utilities\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.174388 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-catalog-content\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.174418 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl64k\" (UniqueName: \"kubernetes.io/projected/2100836b-4b7c-47fa-8d1c-b80352155b5b-kube-api-access-gl64k\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.277400 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-utilities\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.277528 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-catalog-content\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.277548 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl64k\" (UniqueName: \"kubernetes.io/projected/2100836b-4b7c-47fa-8d1c-b80352155b5b-kube-api-access-gl64k\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.278484 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-catalog-content\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.278635 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-utilities\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.303234 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gl64k\" (UniqueName: \"kubernetes.io/projected/2100836b-4b7c-47fa-8d1c-b80352155b5b-kube-api-access-gl64k\") pod \"redhat-marketplace-m8qw9\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.390804 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:24:56 crc kubenswrapper[4904]: I1121 14:24:56.982681 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8qw9"] Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.302185 4904 generic.go:334] "Generic (PLEG): container finished" podID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerID="fcb1dd3d0809e3fecc9e0930752edee71c7f784eff1f0c99233cbeffc74c5cde" exitCode=0 Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.302233 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8qw9" event={"ID":"2100836b-4b7c-47fa-8d1c-b80352155b5b","Type":"ContainerDied","Data":"fcb1dd3d0809e3fecc9e0930752edee71c7f784eff1f0c99233cbeffc74c5cde"} Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.302709 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8qw9" event={"ID":"2100836b-4b7c-47fa-8d1c-b80352155b5b","Type":"ContainerStarted","Data":"e6c8f605238d85bacc14b17ca8593327643546649dd725e4225c654ba599b642"} Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.854017 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7jdwn"] Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.857606 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.867056 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jdwn"] Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.941093 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-catalog-content\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.941186 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-utilities\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:57 crc kubenswrapper[4904]: I1121 14:24:57.941450 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwtjv\" (UniqueName: \"kubernetes.io/projected/64bb1902-324a-4df6-a068-5919e06859e1-kube-api-access-dwtjv\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:58 crc kubenswrapper[4904]: I1121 14:24:58.044256 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-catalog-content\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:58 crc kubenswrapper[4904]: I1121 14:24:58.044353 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-utilities\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:58 crc kubenswrapper[4904]: I1121 14:24:58.044539 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwtjv\" (UniqueName: \"kubernetes.io/projected/64bb1902-324a-4df6-a068-5919e06859e1-kube-api-access-dwtjv\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:58 crc kubenswrapper[4904]: I1121 14:24:58.044866 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-catalog-content\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:58 crc kubenswrapper[4904]: I1121 14:24:58.045160 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-utilities\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:58 crc kubenswrapper[4904]: I1121 14:24:58.068415 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dwtjv\" (UniqueName: \"kubernetes.io/projected/64bb1902-324a-4df6-a068-5919e06859e1-kube-api-access-dwtjv\") pod \"redhat-operators-7jdwn\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:58 crc kubenswrapper[4904]: I1121 14:24:58.193012 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:24:58 crc kubenswrapper[4904]: W1121 14:24:58.709993 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64bb1902_324a_4df6_a068_5919e06859e1.slice/crio-ddc8f5be6f87fcf5cb38be8e2c7d819f38951174d3185854d9ed62ec75c75414 WatchSource:0}: Error finding container ddc8f5be6f87fcf5cb38be8e2c7d819f38951174d3185854d9ed62ec75c75414: Status 404 returned error can't find the container with id ddc8f5be6f87fcf5cb38be8e2c7d819f38951174d3185854d9ed62ec75c75414 Nov 21 14:24:58 crc kubenswrapper[4904]: I1121 14:24:58.714419 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7jdwn"] Nov 21 14:24:59 crc kubenswrapper[4904]: I1121 14:24:59.327931 4904 generic.go:334] "Generic (PLEG): container finished" podID="64bb1902-324a-4df6-a068-5919e06859e1" containerID="916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94" exitCode=0 Nov 21 14:24:59 crc kubenswrapper[4904]: I1121 14:24:59.328408 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jdwn" event={"ID":"64bb1902-324a-4df6-a068-5919e06859e1","Type":"ContainerDied","Data":"916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94"} Nov 21 14:24:59 crc kubenswrapper[4904]: I1121 14:24:59.328446 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jdwn" event={"ID":"64bb1902-324a-4df6-a068-5919e06859e1","Type":"ContainerStarted","Data":"ddc8f5be6f87fcf5cb38be8e2c7d819f38951174d3185854d9ed62ec75c75414"} Nov 21 14:24:59 crc kubenswrapper[4904]: I1121 14:24:59.333320 4904 generic.go:334] "Generic (PLEG): container finished" podID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerID="4b8d1f852c95349cb53a103e3b9c7dcef420c5fb98f66c57f07b655b14db5e9b" exitCode=0 Nov 21 14:24:59 crc kubenswrapper[4904]: I1121 14:24:59.333408 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8qw9" event={"ID":"2100836b-4b7c-47fa-8d1c-b80352155b5b","Type":"ContainerDied","Data":"4b8d1f852c95349cb53a103e3b9c7dcef420c5fb98f66c57f07b655b14db5e9b"} Nov 21 14:25:00 crc kubenswrapper[4904]: I1121 14:25:00.371429 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8qw9" event={"ID":"2100836b-4b7c-47fa-8d1c-b80352155b5b","Type":"ContainerStarted","Data":"75e6304378012d935d8790f96d2f3c73f40b74f53a23f465a455bb68eb576aca"} Nov 21 14:25:00 crc kubenswrapper[4904]: I1121 14:25:00.404534 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m8qw9" podStartSLOduration=1.9945594450000002 podStartE2EDuration="4.404508013s" podCreationTimestamp="2025-11-21 14:24:56 +0000 UTC" firstStartedPulling="2025-11-21 14:24:57.305326104 +0000 UTC m=+3171.426858656" lastFinishedPulling="2025-11-21 14:24:59.715274642 +0000 UTC m=+3173.836807224" observedRunningTime="2025-11-21 14:25:00.40025774 +0000 UTC m=+3174.521790302" watchObservedRunningTime="2025-11-21 
14:25:00.404508013 +0000 UTC m=+3174.526040565" Nov 21 14:25:01 crc kubenswrapper[4904]: I1121 14:25:01.395015 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jdwn" event={"ID":"64bb1902-324a-4df6-a068-5919e06859e1","Type":"ContainerStarted","Data":"876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93"} Nov 21 14:25:02 crc kubenswrapper[4904]: I1121 14:25:02.407943 4904 generic.go:334] "Generic (PLEG): container finished" podID="64bb1902-324a-4df6-a068-5919e06859e1" containerID="876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93" exitCode=0 Nov 21 14:25:02 crc kubenswrapper[4904]: I1121 14:25:02.408058 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jdwn" event={"ID":"64bb1902-324a-4df6-a068-5919e06859e1","Type":"ContainerDied","Data":"876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93"} Nov 21 14:25:03 crc kubenswrapper[4904]: I1121 14:25:03.425468 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jdwn" event={"ID":"64bb1902-324a-4df6-a068-5919e06859e1","Type":"ContainerStarted","Data":"a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176"} Nov 21 14:25:03 crc kubenswrapper[4904]: I1121 14:25:03.449130 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7jdwn" podStartSLOduration=2.98213487 podStartE2EDuration="6.449098621s" podCreationTimestamp="2025-11-21 14:24:57 +0000 UTC" firstStartedPulling="2025-11-21 14:24:59.330524729 +0000 UTC m=+3173.452057281" lastFinishedPulling="2025-11-21 14:25:02.79748848 +0000 UTC m=+3176.919021032" observedRunningTime="2025-11-21 14:25:03.446620641 +0000 UTC m=+3177.568153203" watchObservedRunningTime="2025-11-21 14:25:03.449098621 +0000 UTC m=+3177.570631183" Nov 21 14:25:06 crc kubenswrapper[4904]: I1121 14:25:06.392001 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:25:06 crc kubenswrapper[4904]: I1121 14:25:06.392976 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:25:06 crc kubenswrapper[4904]: I1121 14:25:06.457115 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:25:06 crc kubenswrapper[4904]: I1121 14:25:06.518219 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:25:06 crc kubenswrapper[4904]: E1121 14:25:06.518560 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:25:06 crc kubenswrapper[4904]: I1121 14:25:06.529951 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:25:08 crc kubenswrapper[4904]: I1121 14:25:08.193780 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:25:08 crc kubenswrapper[4904]: 
I1121 14:25:08.197343 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:25:08 crc kubenswrapper[4904]: I1121 14:25:08.830853 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8qw9"] Nov 21 14:25:08 crc kubenswrapper[4904]: I1121 14:25:08.831148 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m8qw9" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerName="registry-server" containerID="cri-o://75e6304378012d935d8790f96d2f3c73f40b74f53a23f465a455bb68eb576aca" gracePeriod=2 Nov 21 14:25:09 crc kubenswrapper[4904]: I1121 14:25:09.248844 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7jdwn" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="registry-server" probeResult="failure" output=< Nov 21 14:25:09 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:25:09 crc kubenswrapper[4904]: > Nov 21 14:25:11 crc kubenswrapper[4904]: I1121 14:25:11.526291 4904 generic.go:334] "Generic (PLEG): container finished" podID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerID="75e6304378012d935d8790f96d2f3c73f40b74f53a23f465a455bb68eb576aca" exitCode=0 Nov 21 14:25:11 crc kubenswrapper[4904]: I1121 14:25:11.526387 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8qw9" event={"ID":"2100836b-4b7c-47fa-8d1c-b80352155b5b","Type":"ContainerDied","Data":"75e6304378012d935d8790f96d2f3c73f40b74f53a23f465a455bb68eb576aca"} Nov 21 14:25:11 crc kubenswrapper[4904]: I1121 14:25:11.900733 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.008913 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-utilities\") pod \"2100836b-4b7c-47fa-8d1c-b80352155b5b\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.009208 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl64k\" (UniqueName: \"kubernetes.io/projected/2100836b-4b7c-47fa-8d1c-b80352155b5b-kube-api-access-gl64k\") pod \"2100836b-4b7c-47fa-8d1c-b80352155b5b\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.009238 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-catalog-content\") pod \"2100836b-4b7c-47fa-8d1c-b80352155b5b\" (UID: \"2100836b-4b7c-47fa-8d1c-b80352155b5b\") " Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.009799 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-utilities" (OuterVolumeSpecName: "utilities") pod "2100836b-4b7c-47fa-8d1c-b80352155b5b" (UID: "2100836b-4b7c-47fa-8d1c-b80352155b5b"). InnerVolumeSpecName "utilities". 
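
Note: the startup probe failure at 14:25:09 above reports `timeout: failed to connect service ":50051" within 1s`; port 50051 is where the catalog pod's registry-server serves gRPC. The sketch below is only a rough stand-in for the connectivity half of that check: the real probe is a gRPC health check rather than a bare TCP dial, and the address here is assumed for illustration.

// probe_dial.go: 1s connect attempt mirroring the probe's timeout.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
	if err != nil {
		// Analogous to: timeout: failed to connect service ":50051" within 1s
		fmt.Println("probe failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("probe ok: 50051 is accepting connections")
}
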
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.017418 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2100836b-4b7c-47fa-8d1c-b80352155b5b-kube-api-access-gl64k" (OuterVolumeSpecName: "kube-api-access-gl64k") pod "2100836b-4b7c-47fa-8d1c-b80352155b5b" (UID: "2100836b-4b7c-47fa-8d1c-b80352155b5b"). InnerVolumeSpecName "kube-api-access-gl64k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.033190 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2100836b-4b7c-47fa-8d1c-b80352155b5b" (UID: "2100836b-4b7c-47fa-8d1c-b80352155b5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.112775 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl64k\" (UniqueName: \"kubernetes.io/projected/2100836b-4b7c-47fa-8d1c-b80352155b5b-kube-api-access-gl64k\") on node \"crc\" DevicePath \"\"" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.113153 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.113172 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2100836b-4b7c-47fa-8d1c-b80352155b5b-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.542555 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m8qw9" event={"ID":"2100836b-4b7c-47fa-8d1c-b80352155b5b","Type":"ContainerDied","Data":"e6c8f605238d85bacc14b17ca8593327643546649dd725e4225c654ba599b642"} Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.543136 4904 scope.go:117] "RemoveContainer" containerID="75e6304378012d935d8790f96d2f3c73f40b74f53a23f465a455bb68eb576aca" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.542629 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m8qw9" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.580744 4904 scope.go:117] "RemoveContainer" containerID="4b8d1f852c95349cb53a103e3b9c7dcef420c5fb98f66c57f07b655b14db5e9b" Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.583283 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8qw9"] Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.603407 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m8qw9"] Nov 21 14:25:12 crc kubenswrapper[4904]: I1121 14:25:12.613597 4904 scope.go:117] "RemoveContainer" containerID="fcb1dd3d0809e3fecc9e0930752edee71c7f784eff1f0c99233cbeffc74c5cde" Nov 21 14:25:14 crc kubenswrapper[4904]: I1121 14:25:14.530245 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" path="/var/lib/kubelet/pods/2100836b-4b7c-47fa-8d1c-b80352155b5b/volumes" Nov 21 14:25:18 crc kubenswrapper[4904]: I1121 14:25:18.259260 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:25:18 crc kubenswrapper[4904]: I1121 14:25:18.322615 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:25:19 crc kubenswrapper[4904]: I1121 14:25:19.512406 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jdwn"] Nov 21 14:25:19 crc kubenswrapper[4904]: I1121 14:25:19.514477 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:25:19 crc kubenswrapper[4904]: E1121 14:25:19.514779 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:25:19 crc kubenswrapper[4904]: I1121 14:25:19.635415 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7jdwn" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="registry-server" containerID="cri-o://a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176" gracePeriod=2 Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.200040 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.317887 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-utilities\") pod \"64bb1902-324a-4df6-a068-5919e06859e1\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.317994 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-catalog-content\") pod \"64bb1902-324a-4df6-a068-5919e06859e1\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.318093 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwtjv\" (UniqueName: \"kubernetes.io/projected/64bb1902-324a-4df6-a068-5919e06859e1-kube-api-access-dwtjv\") pod \"64bb1902-324a-4df6-a068-5919e06859e1\" (UID: \"64bb1902-324a-4df6-a068-5919e06859e1\") " Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.319945 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-utilities" (OuterVolumeSpecName: "utilities") pod "64bb1902-324a-4df6-a068-5919e06859e1" (UID: "64bb1902-324a-4df6-a068-5919e06859e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.326253 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64bb1902-324a-4df6-a068-5919e06859e1-kube-api-access-dwtjv" (OuterVolumeSpecName: "kube-api-access-dwtjv") pod "64bb1902-324a-4df6-a068-5919e06859e1" (UID: "64bb1902-324a-4df6-a068-5919e06859e1"). InnerVolumeSpecName "kube-api-access-dwtjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.421442 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.421483 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwtjv\" (UniqueName: \"kubernetes.io/projected/64bb1902-324a-4df6-a068-5919e06859e1-kube-api-access-dwtjv\") on node \"crc\" DevicePath \"\"" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.428582 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64bb1902-324a-4df6-a068-5919e06859e1" (UID: "64bb1902-324a-4df6-a068-5919e06859e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.523295 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64bb1902-324a-4df6-a068-5919e06859e1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.649646 4904 generic.go:334] "Generic (PLEG): container finished" podID="64bb1902-324a-4df6-a068-5919e06859e1" containerID="a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176" exitCode=0 Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.649732 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jdwn" event={"ID":"64bb1902-324a-4df6-a068-5919e06859e1","Type":"ContainerDied","Data":"a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176"} Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.649769 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7jdwn" event={"ID":"64bb1902-324a-4df6-a068-5919e06859e1","Type":"ContainerDied","Data":"ddc8f5be6f87fcf5cb38be8e2c7d819f38951174d3185854d9ed62ec75c75414"} Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.649777 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7jdwn" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.649799 4904 scope.go:117] "RemoveContainer" containerID="a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.678805 4904 scope.go:117] "RemoveContainer" containerID="876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.683352 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7jdwn"] Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.693271 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7jdwn"] Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.709844 4904 scope.go:117] "RemoveContainer" containerID="916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.753719 4904 scope.go:117] "RemoveContainer" containerID="a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176" Nov 21 14:25:20 crc kubenswrapper[4904]: E1121 14:25:20.754437 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176\": container with ID starting with a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176 not found: ID does not exist" containerID="a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.754613 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176"} err="failed to get container status \"a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176\": rpc error: code = NotFound desc = could not find container \"a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176\": container with ID starting with a8f24afb587587d9e52ed878a8f5c695ebccc5c4f96d13070794d239d360b176 not found: ID does not exist" Nov 21 14:25:20 crc 
kubenswrapper[4904]: I1121 14:25:20.754776 4904 scope.go:117] "RemoveContainer" containerID="876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93" Nov 21 14:25:20 crc kubenswrapper[4904]: E1121 14:25:20.755349 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93\": container with ID starting with 876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93 not found: ID does not exist" containerID="876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.755451 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93"} err="failed to get container status \"876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93\": rpc error: code = NotFound desc = could not find container \"876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93\": container with ID starting with 876d6a85221990d964fea65f98540cc410e34115cb674e0422242a01ad219e93 not found: ID does not exist" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.755524 4904 scope.go:117] "RemoveContainer" containerID="916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94" Nov 21 14:25:20 crc kubenswrapper[4904]: E1121 14:25:20.755923 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94\": container with ID starting with 916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94 not found: ID does not exist" containerID="916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94" Nov 21 14:25:20 crc kubenswrapper[4904]: I1121 14:25:20.756012 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94"} err="failed to get container status \"916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94\": rpc error: code = NotFound desc = could not find container \"916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94\": container with ID starting with 916bc7069b2f61d96bae23e3aeddb85c634136ddb6cafc5b3c94088b0336de94 not found: ID does not exist" Nov 21 14:25:22 crc kubenswrapper[4904]: I1121 14:25:22.526878 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64bb1902-324a-4df6-a068-5919e06859e1" path="/var/lib/kubelet/pods/64bb1902-324a-4df6-a068-5919e06859e1/volumes" Nov 21 14:25:34 crc kubenswrapper[4904]: I1121 14:25:34.513303 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:25:35 crc kubenswrapper[4904]: I1121 14:25:35.857919 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"633af6aaae1d82356d12d2154eb872c925ee1ebbcb5264c806e81d86e839558f"} Nov 21 14:27:58 crc kubenswrapper[4904]: I1121 14:27:58.114024 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
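The RemoveContainer / "ID does not exist" pairs above are benign: the container was already pruned by the runtime, so a second delete attempt gets an rpc NotFound, which the caller logs and moves past. A minimal sketch of that idempotent-delete pattern over a gRPC runtime API (removeContainer and runtimeService are hypothetical stand-ins for illustration, not kubelet names):

package criutil

import (
	"context"
	"log"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService stands in for a CRI client; only the call shape matters here.
type runtimeService interface {
	RemoveContainer(ctx context.Context, id string) error
}

func removeContainer(ctx context.Context, rt runtimeService, id string) error {
	err := rt.RemoveContainer(ctx, id)
	if status.Code(err) == codes.NotFound {
		// Same outcome the log records: the container is already gone,
		// so the deletion counts as complete rather than failed.
		log.Printf("container %s not found: treating delete as a no-op", id)
		return nil
	}
	return err
}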
refused" start-of-body= Nov 21 14:27:58 crc kubenswrapper[4904]: I1121 14:27:58.114715 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.569832 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v579v"] Nov 21 14:28:12 crc kubenswrapper[4904]: E1121 14:28:12.571137 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerName="extract-content" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.571153 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerName="extract-content" Nov 21 14:28:12 crc kubenswrapper[4904]: E1121 14:28:12.571167 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="extract-utilities" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.571175 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="extract-utilities" Nov 21 14:28:12 crc kubenswrapper[4904]: E1121 14:28:12.571197 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerName="extract-utilities" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.571203 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerName="extract-utilities" Nov 21 14:28:12 crc kubenswrapper[4904]: E1121 14:28:12.571213 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="registry-server" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.571218 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="registry-server" Nov 21 14:28:12 crc kubenswrapper[4904]: E1121 14:28:12.571248 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerName="registry-server" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.571254 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerName="registry-server" Nov 21 14:28:12 crc kubenswrapper[4904]: E1121 14:28:12.571262 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="extract-content" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.571268 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="extract-content" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.571480 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="64bb1902-324a-4df6-a068-5919e06859e1" containerName="registry-server" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.571499 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2100836b-4b7c-47fa-8d1c-b80352155b5b" containerName="registry-server" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.573098 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.595908 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v579v"] Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.653251 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-utilities\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.653387 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkl6d\" (UniqueName: \"kubernetes.io/projected/07cf9de1-90e1-4b7b-828f-c765466be191-kube-api-access-qkl6d\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.653484 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-catalog-content\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.756779 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-utilities\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.756876 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkl6d\" (UniqueName: \"kubernetes.io/projected/07cf9de1-90e1-4b7b-828f-c765466be191-kube-api-access-qkl6d\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.756921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-catalog-content\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.758136 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-catalog-content\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.758185 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-utilities\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.782680 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qkl6d\" (UniqueName: \"kubernetes.io/projected/07cf9de1-90e1-4b7b-828f-c765466be191-kube-api-access-qkl6d\") pod \"certified-operators-v579v\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:12 crc kubenswrapper[4904]: I1121 14:28:12.945408 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:13 crc kubenswrapper[4904]: I1121 14:28:13.513441 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v579v"] Nov 21 14:28:13 crc kubenswrapper[4904]: I1121 14:28:13.744044 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v579v" event={"ID":"07cf9de1-90e1-4b7b-828f-c765466be191","Type":"ContainerStarted","Data":"269ffd31458adbd6245a27d609742257268dddb50ff8a83c58b145126158cc61"} Nov 21 14:28:14 crc kubenswrapper[4904]: I1121 14:28:14.760953 4904 generic.go:334] "Generic (PLEG): container finished" podID="07cf9de1-90e1-4b7b-828f-c765466be191" containerID="1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4" exitCode=0 Nov 21 14:28:14 crc kubenswrapper[4904]: I1121 14:28:14.761028 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v579v" event={"ID":"07cf9de1-90e1-4b7b-828f-c765466be191","Type":"ContainerDied","Data":"1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4"} Nov 21 14:28:14 crc kubenswrapper[4904]: I1121 14:28:14.763683 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:28:16 crc kubenswrapper[4904]: I1121 14:28:16.786450 4904 generic.go:334] "Generic (PLEG): container finished" podID="07cf9de1-90e1-4b7b-828f-c765466be191" containerID="1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3" exitCode=0 Nov 21 14:28:16 crc kubenswrapper[4904]: I1121 14:28:16.786630 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v579v" event={"ID":"07cf9de1-90e1-4b7b-828f-c765466be191","Type":"ContainerDied","Data":"1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3"} Nov 21 14:28:17 crc kubenswrapper[4904]: I1121 14:28:17.801963 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v579v" event={"ID":"07cf9de1-90e1-4b7b-828f-c765466be191","Type":"ContainerStarted","Data":"8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430"} Nov 21 14:28:17 crc kubenswrapper[4904]: I1121 14:28:17.826190 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v579v" podStartSLOduration=3.425790229 podStartE2EDuration="5.826163735s" podCreationTimestamp="2025-11-21 14:28:12 +0000 UTC" firstStartedPulling="2025-11-21 14:28:14.763382937 +0000 UTC m=+3368.884915489" lastFinishedPulling="2025-11-21 14:28:17.163756433 +0000 UTC m=+3371.285288995" observedRunningTime="2025-11-21 14:28:17.822969527 +0000 UTC m=+3371.944502099" watchObservedRunningTime="2025-11-21 14:28:17.826163735 +0000 UTC m=+3371.947696287" Nov 21 14:28:22 crc kubenswrapper[4904]: I1121 14:28:22.945828 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:22 crc kubenswrapper[4904]: I1121 14:28:22.946504 4904 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:22 crc kubenswrapper[4904]: I1121 14:28:22.997495 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:23 crc kubenswrapper[4904]: I1121 14:28:23.968520 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:24 crc kubenswrapper[4904]: I1121 14:28:24.029076 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v579v"] Nov 21 14:28:25 crc kubenswrapper[4904]: I1121 14:28:25.938609 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v579v" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" containerName="registry-server" containerID="cri-o://8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430" gracePeriod=2 Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.464348 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.519142 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkl6d\" (UniqueName: \"kubernetes.io/projected/07cf9de1-90e1-4b7b-828f-c765466be191-kube-api-access-qkl6d\") pod \"07cf9de1-90e1-4b7b-828f-c765466be191\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.519484 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-catalog-content\") pod \"07cf9de1-90e1-4b7b-828f-c765466be191\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.519526 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-utilities\") pod \"07cf9de1-90e1-4b7b-828f-c765466be191\" (UID: \"07cf9de1-90e1-4b7b-828f-c765466be191\") " Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.520884 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-utilities" (OuterVolumeSpecName: "utilities") pod "07cf9de1-90e1-4b7b-828f-c765466be191" (UID: "07cf9de1-90e1-4b7b-828f-c765466be191"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.528179 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07cf9de1-90e1-4b7b-828f-c765466be191-kube-api-access-qkl6d" (OuterVolumeSpecName: "kube-api-access-qkl6d") pod "07cf9de1-90e1-4b7b-828f-c765466be191" (UID: "07cf9de1-90e1-4b7b-828f-c765466be191"). InnerVolumeSpecName "kube-api-access-qkl6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.580318 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07cf9de1-90e1-4b7b-828f-c765466be191" (UID: "07cf9de1-90e1-4b7b-828f-c765466be191"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.623048 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.623322 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07cf9de1-90e1-4b7b-828f-c765466be191-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.623389 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkl6d\" (UniqueName: \"kubernetes.io/projected/07cf9de1-90e1-4b7b-828f-c765466be191-kube-api-access-qkl6d\") on node \"crc\" DevicePath \"\"" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.955555 4904 generic.go:334] "Generic (PLEG): container finished" podID="07cf9de1-90e1-4b7b-828f-c765466be191" containerID="8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430" exitCode=0 Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.955629 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v579v" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.955625 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v579v" event={"ID":"07cf9de1-90e1-4b7b-828f-c765466be191","Type":"ContainerDied","Data":"8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430"} Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.955860 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v579v" event={"ID":"07cf9de1-90e1-4b7b-828f-c765466be191","Type":"ContainerDied","Data":"269ffd31458adbd6245a27d609742257268dddb50ff8a83c58b145126158cc61"} Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.955891 4904 scope.go:117] "RemoveContainer" containerID="8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430" Nov 21 14:28:26 crc kubenswrapper[4904]: I1121 14:28:26.987121 4904 scope.go:117] "RemoveContainer" containerID="1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3" Nov 21 14:28:27 crc kubenswrapper[4904]: I1121 14:28:27.023671 4904 scope.go:117] "RemoveContainer" containerID="1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4" Nov 21 14:28:27 crc kubenswrapper[4904]: I1121 14:28:27.027179 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v579v"] Nov 21 14:28:27 crc kubenswrapper[4904]: I1121 14:28:27.041861 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v579v"] Nov 21 14:28:27 crc kubenswrapper[4904]: I1121 14:28:27.078116 4904 scope.go:117] "RemoveContainer" containerID="8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430" Nov 21 14:28:27 crc kubenswrapper[4904]: E1121 14:28:27.078787 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430\": container with ID starting with 8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430 not found: ID does not exist" containerID="8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430" Nov 21 14:28:27 crc 
kubenswrapper[4904]: I1121 14:28:27.078845 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430"} err="failed to get container status \"8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430\": rpc error: code = NotFound desc = could not find container \"8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430\": container with ID starting with 8f7e275f9b1bf20594a1d9ccd42235e79550f21d78eb0819c4b5b365e5449430 not found: ID does not exist" Nov 21 14:28:27 crc kubenswrapper[4904]: I1121 14:28:27.078880 4904 scope.go:117] "RemoveContainer" containerID="1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3" Nov 21 14:28:27 crc kubenswrapper[4904]: E1121 14:28:27.080118 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3\": container with ID starting with 1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3 not found: ID does not exist" containerID="1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3" Nov 21 14:28:27 crc kubenswrapper[4904]: I1121 14:28:27.080183 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3"} err="failed to get container status \"1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3\": rpc error: code = NotFound desc = could not find container \"1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3\": container with ID starting with 1483db61f84d93334879a764abf60876604ac07b0f3019aaadaf8991955945b3 not found: ID does not exist" Nov 21 14:28:27 crc kubenswrapper[4904]: I1121 14:28:27.080226 4904 scope.go:117] "RemoveContainer" containerID="1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4" Nov 21 14:28:27 crc kubenswrapper[4904]: E1121 14:28:27.080702 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4\": container with ID starting with 1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4 not found: ID does not exist" containerID="1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4" Nov 21 14:28:27 crc kubenswrapper[4904]: I1121 14:28:27.080753 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4"} err="failed to get container status \"1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4\": rpc error: code = NotFound desc = could not find container \"1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4\": container with ID starting with 1f3959835a6ef09df569b78df41052591dac1abb834f9b91b7e5e6981e3e51a4 not found: ID does not exist" Nov 21 14:28:28 crc kubenswrapper[4904]: I1121 14:28:28.113438 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:28:28 crc kubenswrapper[4904]: I1121 14:28:28.113850 4904 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:28:28 crc kubenswrapper[4904]: I1121 14:28:28.543484 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" path="/var/lib/kubelet/pods/07cf9de1-90e1-4b7b-828f-c765466be191/volumes" Nov 21 14:28:58 crc kubenswrapper[4904]: I1121 14:28:58.113694 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:28:58 crc kubenswrapper[4904]: I1121 14:28:58.114464 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:28:58 crc kubenswrapper[4904]: I1121 14:28:58.114532 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 14:28:58 crc kubenswrapper[4904]: I1121 14:28:58.115687 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"633af6aaae1d82356d12d2154eb872c925ee1ebbcb5264c806e81d86e839558f"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:28:58 crc kubenswrapper[4904]: I1121 14:28:58.115748 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://633af6aaae1d82356d12d2154eb872c925ee1ebbcb5264c806e81d86e839558f" gracePeriod=600 Nov 21 14:28:58 crc kubenswrapper[4904]: I1121 14:28:58.336036 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="633af6aaae1d82356d12d2154eb872c925ee1ebbcb5264c806e81d86e839558f" exitCode=0 Nov 21 14:28:58 crc kubenswrapper[4904]: I1121 14:28:58.336097 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"633af6aaae1d82356d12d2154eb872c925ee1ebbcb5264c806e81d86e839558f"} Nov 21 14:28:58 crc kubenswrapper[4904]: I1121 14:28:58.336149 4904 scope.go:117] "RemoveContainer" containerID="7c3496c4bbaa6000fd0093e9cd3b74a8cff2f9563bc25196354705c18031f35d" Nov 21 14:28:59 crc kubenswrapper[4904]: I1121 14:28:59.353312 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19"} Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.201240 4904 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl"] Nov 21 14:30:00 crc kubenswrapper[4904]: E1121 14:30:00.202990 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" containerName="registry-server" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.203020 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" containerName="registry-server" Nov 21 14:30:00 crc kubenswrapper[4904]: E1121 14:30:00.203041 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" containerName="extract-content" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.203049 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" containerName="extract-content" Nov 21 14:30:00 crc kubenswrapper[4904]: E1121 14:30:00.203067 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" containerName="extract-utilities" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.203076 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" containerName="extract-utilities" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.203466 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="07cf9de1-90e1-4b7b-828f-c765466be191" containerName="registry-server" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.204856 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.207759 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.210196 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-secret-volume\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.210300 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xws8\" (UniqueName: \"kubernetes.io/projected/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-kube-api-access-9xws8\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.210385 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-config-volume\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.211129 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.218723 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl"] Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.313324 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xws8\" (UniqueName: \"kubernetes.io/projected/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-kube-api-access-9xws8\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.313554 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-config-volume\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.313836 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-secret-volume\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.314780 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-config-volume\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.323314 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-secret-volume\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.338995 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xws8\" (UniqueName: \"kubernetes.io/projected/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-kube-api-access-9xws8\") pod \"collect-profiles-29395590-htrvl\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:00 crc kubenswrapper[4904]: I1121 14:30:00.528758 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:01 crc kubenswrapper[4904]: I1121 14:30:01.011445 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl"] Nov 21 14:30:01 crc kubenswrapper[4904]: I1121 14:30:01.148419 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" event={"ID":"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96","Type":"ContainerStarted","Data":"09b0075d7b6b720c7e55f5e12ec2ed9bef13dcadcdba9076adcf908718e51da4"} Nov 21 14:30:02 crc kubenswrapper[4904]: I1121 14:30:02.165091 4904 generic.go:334] "Generic (PLEG): container finished" podID="1c83b3a3-48f6-49d9-9ed8-ad61323dcf96" containerID="db5744a00aa33e6fd6a0465fd38de426218b26aa5172e6342b45aa4b560c58e7" exitCode=0 Nov 21 14:30:02 crc kubenswrapper[4904]: I1121 14:30:02.165223 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" event={"ID":"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96","Type":"ContainerDied","Data":"db5744a00aa33e6fd6a0465fd38de426218b26aa5172e6342b45aa4b560c58e7"} Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.606945 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.708907 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-secret-volume\") pod \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.709020 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xws8\" (UniqueName: \"kubernetes.io/projected/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-kube-api-access-9xws8\") pod \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.709304 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-config-volume\") pod \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\" (UID: \"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96\") " Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.710211 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c83b3a3-48f6-49d9-9ed8-ad61323dcf96" (UID: "1c83b3a3-48f6-49d9-9ed8-ad61323dcf96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.722628 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-kube-api-access-9xws8" (OuterVolumeSpecName: "kube-api-access-9xws8") pod "1c83b3a3-48f6-49d9-9ed8-ad61323dcf96" (UID: "1c83b3a3-48f6-49d9-9ed8-ad61323dcf96"). InnerVolumeSpecName "kube-api-access-9xws8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.724211 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c83b3a3-48f6-49d9-9ed8-ad61323dcf96" (UID: "1c83b3a3-48f6-49d9-9ed8-ad61323dcf96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.813539 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.813601 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xws8\" (UniqueName: \"kubernetes.io/projected/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-kube-api-access-9xws8\") on node \"crc\" DevicePath \"\"" Nov 21 14:30:03 crc kubenswrapper[4904]: I1121 14:30:03.813617 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:30:04 crc kubenswrapper[4904]: I1121 14:30:04.194068 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" Nov 21 14:30:04 crc kubenswrapper[4904]: I1121 14:30:04.197827 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl" event={"ID":"1c83b3a3-48f6-49d9-9ed8-ad61323dcf96","Type":"ContainerDied","Data":"09b0075d7b6b720c7e55f5e12ec2ed9bef13dcadcdba9076adcf908718e51da4"} Nov 21 14:30:04 crc kubenswrapper[4904]: I1121 14:30:04.198012 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09b0075d7b6b720c7e55f5e12ec2ed9bef13dcadcdba9076adcf908718e51da4" Nov 21 14:30:04 crc kubenswrapper[4904]: I1121 14:30:04.719684 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn"] Nov 21 14:30:04 crc kubenswrapper[4904]: I1121 14:30:04.732387 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395545-dkqzn"] Nov 21 14:30:06 crc kubenswrapper[4904]: I1121 14:30:06.541745 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713735aa-1fd7-4ee5-8106-9be266844186" path="/var/lib/kubelet/pods/713735aa-1fd7-4ee5-8106-9be266844186/volumes" Nov 21 14:30:29 crc kubenswrapper[4904]: I1121 14:30:29.611088 4904 scope.go:117] "RemoveContainer" containerID="5acbc35650d7bdf78a2e0a7d659d083b409421928c437f46f034205818038033" Nov 21 14:30:33 crc kubenswrapper[4904]: I1121 14:30:33.945048 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9"] Nov 21 14:30:33 crc kubenswrapper[4904]: I1121 14:30:33.963994 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm"] Nov 21 14:30:33 crc kubenswrapper[4904]: I1121 14:30:33.974439 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl"] Nov 21 14:30:33 crc kubenswrapper[4904]: I1121 14:30:33.986247 4904 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:33.999963 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.012854 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.023094 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.035344 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-bk6zs"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.045892 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-tfppm"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.055972 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wmmw9"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.068791 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.081591 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.090744 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ss26p"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.099908 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.107866 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.117144 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.125898 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.135281 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.152166 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4qt6q"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.155117 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-9lwsl"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.165034 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-gkgjd"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.173080 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-rgx5n"] Nov 21 14:30:34 crc kubenswrapper[4904]: 
I1121 14:30:34.181478 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qszbq"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.190210 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-x9spq"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.200483 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g52np"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.211705 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kpdcv"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.220545 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.229156 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ps56b"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.241929 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bzls9"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.251278 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-22dr7"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.260811 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-b42j6"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.269642 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-x9spq"] Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.530353 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c65fc0-a096-4ee6-ba62-cc26be39eda9" path="/var/lib/kubelet/pods/16c65fc0-a096-4ee6-ba62-cc26be39eda9/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.531264 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193fa023-e769-49e3-8936-f7454a7ccaca" path="/var/lib/kubelet/pods/193fa023-e769-49e3-8936-f7454a7ccaca/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.532164 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28761749-49f4-46db-8542-f82d6fb532c7" path="/var/lib/kubelet/pods/28761749-49f4-46db-8542-f82d6fb532c7/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.532948 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e44d23-28ea-460f-9742-9292290aecda" path="/var/lib/kubelet/pods/29e44d23-28ea-460f-9742-9292290aecda/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.534414 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3782dfd0-142f-4927-92fd-372372e50541" path="/var/lib/kubelet/pods/3782dfd0-142f-4927-92fd-372372e50541/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.538357 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59615e38-ebdb-4ce9-9922-942c2ff0d82c" path="/var/lib/kubelet/pods/59615e38-ebdb-4ce9-9922-942c2ff0d82c/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.539195 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6afae033-a8c3-4acb-a441-e7178c4bd031" 
path="/var/lib/kubelet/pods/6afae033-a8c3-4acb-a441-e7178c4bd031/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.540056 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70657906-3734-496b-b268-5b0ebbb6d6de" path="/var/lib/kubelet/pods/70657906-3734-496b-b268-5b0ebbb6d6de/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.541544 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b6e5c0f-f6b3-4843-9075-a069ffcadb7b" path="/var/lib/kubelet/pods/7b6e5c0f-f6b3-4843-9075-a069ffcadb7b/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.542564 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fc13b06-871e-4313-a9fd-2d87ba4347da" path="/var/lib/kubelet/pods/7fc13b06-871e-4313-a9fd-2d87ba4347da/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.543508 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a68773-3a8c-402f-ba62-6bd46459d2c0" path="/var/lib/kubelet/pods/86a68773-3a8c-402f-ba62-6bd46459d2c0/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.545131 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c093339a-118d-434d-a647-fff82c7882d2" path="/var/lib/kubelet/pods/c093339a-118d-434d-a647-fff82c7882d2/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.546199 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec" path="/var/lib/kubelet/pods/ccbcff93-5d1e-4f75-8cb7-b6c4a6ba46ec/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.547267 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e930ad7a-dc96-49b3-9d1b-2fe2fca33545" path="/var/lib/kubelet/pods/e930ad7a-dc96-49b3-9d1b-2fe2fca33545/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.548465 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5007ed-771b-41ad-be4b-f2d008a9217b" path="/var/lib/kubelet/pods/eb5007ed-771b-41ad-be4b-f2d008a9217b/volumes" Nov 21 14:30:34 crc kubenswrapper[4904]: I1121 14:30:34.550277 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c6830d-5463-465e-9d1e-eb695b465dbd" path="/var/lib/kubelet/pods/f2c6830d-5463-465e-9d1e-eb695b465dbd/volumes" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.859530 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc"] Nov 21 14:30:38 crc kubenswrapper[4904]: E1121 14:30:38.861594 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c83b3a3-48f6-49d9-9ed8-ad61323dcf96" containerName="collect-profiles" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.861620 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c83b3a3-48f6-49d9-9ed8-ad61323dcf96" containerName="collect-profiles" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.862273 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c83b3a3-48f6-49d9-9ed8-ad61323dcf96" containerName="collect-profiles" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.863740 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.873568 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.873803 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.873879 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.874000 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.874111 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.904611 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc"] Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.988204 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.988603 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.988779 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.988856 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:38 crc kubenswrapper[4904]: I1121 14:30:38.989102 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcbm4\" (UniqueName: \"kubernetes.io/projected/620a1fcb-b550-41c2-964f-f3212ff8a2d0-kube-api-access-hcbm4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.091968 4904 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hcbm4\" (UniqueName: \"kubernetes.io/projected/620a1fcb-b550-41c2-964f-f3212ff8a2d0-kube-api-access-hcbm4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.092114 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.092175 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.092204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.092323 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.100023 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.100573 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.102252 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.103366 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.113586 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcbm4\" (UniqueName: \"kubernetes.io/projected/620a1fcb-b550-41c2-964f-f3212ff8a2d0-kube-api-access-hcbm4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.205394 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:39 crc kubenswrapper[4904]: I1121 14:30:39.798356 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc"] Nov 21 14:30:40 crc kubenswrapper[4904]: I1121 14:30:40.605449 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" event={"ID":"620a1fcb-b550-41c2-964f-f3212ff8a2d0","Type":"ContainerStarted","Data":"ba1c6f65a186f05d4dce3911529b83b1fd67032d94052b6dd5e0de192546b58b"} Nov 21 14:30:40 crc kubenswrapper[4904]: I1121 14:30:40.605844 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" event={"ID":"620a1fcb-b550-41c2-964f-f3212ff8a2d0","Type":"ContainerStarted","Data":"0c96e51f7f5bd984b0363277793585eebea817d3e22cc5235a09e4c9c5e4cb35"} Nov 21 14:30:54 crc kubenswrapper[4904]: I1121 14:30:54.785085 4904 generic.go:334] "Generic (PLEG): container finished" podID="620a1fcb-b550-41c2-964f-f3212ff8a2d0" containerID="ba1c6f65a186f05d4dce3911529b83b1fd67032d94052b6dd5e0de192546b58b" exitCode=0 Nov 21 14:30:54 crc kubenswrapper[4904]: I1121 14:30:54.785169 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" event={"ID":"620a1fcb-b550-41c2-964f-f3212ff8a2d0","Type":"ContainerDied","Data":"ba1c6f65a186f05d4dce3911529b83b1fd67032d94052b6dd5e0de192546b58b"} Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.320532 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.474616 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ceph\") pod \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.474722 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ssh-key\") pod \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.474757 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcbm4\" (UniqueName: \"kubernetes.io/projected/620a1fcb-b550-41c2-964f-f3212ff8a2d0-kube-api-access-hcbm4\") pod \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.474878 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-inventory\") pod \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.475076 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-repo-setup-combined-ca-bundle\") pod \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\" (UID: \"620a1fcb-b550-41c2-964f-f3212ff8a2d0\") " Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.482717 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620a1fcb-b550-41c2-964f-f3212ff8a2d0-kube-api-access-hcbm4" (OuterVolumeSpecName: "kube-api-access-hcbm4") pod "620a1fcb-b550-41c2-964f-f3212ff8a2d0" (UID: "620a1fcb-b550-41c2-964f-f3212ff8a2d0"). InnerVolumeSpecName "kube-api-access-hcbm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.483294 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ceph" (OuterVolumeSpecName: "ceph") pod "620a1fcb-b550-41c2-964f-f3212ff8a2d0" (UID: "620a1fcb-b550-41c2-964f-f3212ff8a2d0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.483942 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "620a1fcb-b550-41c2-964f-f3212ff8a2d0" (UID: "620a1fcb-b550-41c2-964f-f3212ff8a2d0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.513157 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-inventory" (OuterVolumeSpecName: "inventory") pod "620a1fcb-b550-41c2-964f-f3212ff8a2d0" (UID: "620a1fcb-b550-41c2-964f-f3212ff8a2d0"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.530860 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "620a1fcb-b550-41c2-964f-f3212ff8a2d0" (UID: "620a1fcb-b550-41c2-964f-f3212ff8a2d0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.578198 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.578249 4904 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.578263 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.578272 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/620a1fcb-b550-41c2-964f-f3212ff8a2d0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.578283 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcbm4\" (UniqueName: \"kubernetes.io/projected/620a1fcb-b550-41c2-964f-f3212ff8a2d0-kube-api-access-hcbm4\") on node \"crc\" DevicePath \"\"" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.812676 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" event={"ID":"620a1fcb-b550-41c2-964f-f3212ff8a2d0","Type":"ContainerDied","Data":"0c96e51f7f5bd984b0363277793585eebea817d3e22cc5235a09e4c9c5e4cb35"} Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.812734 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c96e51f7f5bd984b0363277793585eebea817d3e22cc5235a09e4c9c5e4cb35" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.813031 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.908563 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q"] Nov 21 14:30:56 crc kubenswrapper[4904]: E1121 14:30:56.909084 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="620a1fcb-b550-41c2-964f-f3212ff8a2d0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.909111 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="620a1fcb-b550-41c2-964f-f3212ff8a2d0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.909364 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="620a1fcb-b550-41c2-964f-f3212ff8a2d0" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.914500 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.916532 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.917274 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.917414 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.928670 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q"] Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.929400 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:30:56 crc kubenswrapper[4904]: I1121 14:30:56.929737 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.089472 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m7tp\" (UniqueName: \"kubernetes.io/projected/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-kube-api-access-5m7tp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.089968 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.090279 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: 
\"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.090612 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.092064 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.194197 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.194373 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.196319 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.196382 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m7tp\" (UniqueName: \"kubernetes.io/projected/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-kube-api-access-5m7tp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.196674 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.201577 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" 
Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.201891 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.202646 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.202887 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.218916 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m7tp\" (UniqueName: \"kubernetes.io/projected/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-kube-api-access-5m7tp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.287521 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:30:57 crc kubenswrapper[4904]: I1121 14:30:57.905978 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q"] Nov 21 14:30:57 crc kubenswrapper[4904]: W1121 14:30:57.921362 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a28f51f_d088_4c6b_aec9_9fcde8dd9b94.slice/crio-ffd701999d1dc5d1933fbbd140f563ea33b2cd5e47f4049e6e229c52f6e30c5a WatchSource:0}: Error finding container ffd701999d1dc5d1933fbbd140f563ea33b2cd5e47f4049e6e229c52f6e30c5a: Status 404 returned error can't find the container with id ffd701999d1dc5d1933fbbd140f563ea33b2cd5e47f4049e6e229c52f6e30c5a Nov 21 14:30:58 crc kubenswrapper[4904]: I1121 14:30:58.113284 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:30:58 crc kubenswrapper[4904]: I1121 14:30:58.113992 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:30:58 crc kubenswrapper[4904]: I1121 14:30:58.836994 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" event={"ID":"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94","Type":"ContainerStarted","Data":"0fbadf0d824220bf317451621495a138fa699385009c6e90e2aecb700594e81a"} Nov 21 14:30:58 crc kubenswrapper[4904]: I1121 14:30:58.837085 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" event={"ID":"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94","Type":"ContainerStarted","Data":"ffd701999d1dc5d1933fbbd140f563ea33b2cd5e47f4049e6e229c52f6e30c5a"} Nov 21 14:31:28 crc kubenswrapper[4904]: I1121 14:31:28.114198 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:31:28 crc kubenswrapper[4904]: I1121 14:31:28.114783 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:31:29 crc kubenswrapper[4904]: I1121 14:31:29.691169 4904 scope.go:117] "RemoveContainer" containerID="c9fb3fee4864cac6bc09705fb926ce9725efb86f5970c651b1f075609f80f0e5" Nov 21 14:31:29 crc kubenswrapper[4904]: I1121 14:31:29.769638 4904 scope.go:117] "RemoveContainer" containerID="f2f8f2857561e04916f01f486dba33f942a00347eb1f6b6fcddbbd7376ec5d3a" Nov 21 14:31:29 crc kubenswrapper[4904]: I1121 14:31:29.853525 4904 scope.go:117] "RemoveContainer" containerID="6774cc8d57618da61cea01e300b7095f7363bf883fc4a165aaf5822e066f543f" Nov 21 14:31:29 crc 
kubenswrapper[4904]: I1121 14:31:29.896718 4904 scope.go:117] "RemoveContainer" containerID="9cf4a63321a7cb41764cbaa9d1932886f53a2d66c54872eaf1e6f8be06623e61" Nov 21 14:31:29 crc kubenswrapper[4904]: I1121 14:31:29.964197 4904 scope.go:117] "RemoveContainer" containerID="bdde45774623c37671a3f404c9acd919211067df2189a485ae45ec6365d79618" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.008430 4904 scope.go:117] "RemoveContainer" containerID="6647fb1070de1ed8bdb82698e88bf6b20bac0499651b1b0c03f996a102584817" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.071461 4904 scope.go:117] "RemoveContainer" containerID="fc38a30100d602ffc813ed69dcd6e6c3f742e20347f8f2faba89295ad5e70558" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.126265 4904 scope.go:117] "RemoveContainer" containerID="ae6bce2a2ebc262a61d1e6d7c46787a1b8eb268af1907c1ec71095ad2538c1cb" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.198820 4904 scope.go:117] "RemoveContainer" containerID="74d4fd0e067790b8271e124c2f85954f4ea4a61a98713def6057ecddf7e52b9f" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.234302 4904 scope.go:117] "RemoveContainer" containerID="116bd5f733f6c9f509b56a4eafd3fac5ee3226e35efce867c27f2ab5ae11ea68" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.296781 4904 scope.go:117] "RemoveContainer" containerID="8bfa9d930d7572ee36a68be41f3595d1cfa675d1d959a77b900638e9e41d0dfd" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.336080 4904 scope.go:117] "RemoveContainer" containerID="8506a9659f382c37a0eae4d467d5b68098a5f740d321dd44f68986e7298a06c6" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.372925 4904 scope.go:117] "RemoveContainer" containerID="307f342d267b110f16c6a9854233bd486456514c2fea0c36e6f514bb39a89da8" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.440216 4904 scope.go:117] "RemoveContainer" containerID="87feb1c6987477a5e8445b56f03a655fd068c0981b3371ae2e48934cba478438" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.506947 4904 scope.go:117] "RemoveContainer" containerID="c3ff4f3f3f834ba5339446318848db972fc6d25c45fbc2a07ff406aabe913f00" Nov 21 14:31:30 crc kubenswrapper[4904]: I1121 14:31:30.549369 4904 scope.go:117] "RemoveContainer" containerID="05b61bac72fd287ff5340bc645cbbcf47ad31208bfee021304b6536b251712d1" Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.113837 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.114426 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.114481 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.115385 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19"} 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.115441 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" gracePeriod=600 Nov 21 14:31:58 crc kubenswrapper[4904]: E1121 14:31:58.272100 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.602372 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" exitCode=0 Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.603060 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19"} Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.603178 4904 scope.go:117] "RemoveContainer" containerID="633af6aaae1d82356d12d2154eb872c925ee1ebbcb5264c806e81d86e839558f" Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.605064 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:31:58 crc kubenswrapper[4904]: E1121 14:31:58.605727 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:31:58 crc kubenswrapper[4904]: I1121 14:31:58.646174 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" podStartSLOduration=62.246156879 podStartE2EDuration="1m2.646124367s" podCreationTimestamp="2025-11-21 14:30:56 +0000 UTC" firstStartedPulling="2025-11-21 14:30:57.924971452 +0000 UTC m=+3532.046504004" lastFinishedPulling="2025-11-21 14:30:58.32493892 +0000 UTC m=+3532.446471492" observedRunningTime="2025-11-21 14:30:58.856441798 +0000 UTC m=+3532.977974350" watchObservedRunningTime="2025-11-21 14:31:58.646124367 +0000 UTC m=+3592.767656959" Nov 21 14:32:09 crc kubenswrapper[4904]: I1121 14:32:09.513128 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:32:09 crc kubenswrapper[4904]: E1121 14:32:09.514814 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:32:12 crc kubenswrapper[4904]: I1121 14:32:12.970907 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vhnmr"] Nov 21 14:32:12 crc kubenswrapper[4904]: I1121 14:32:12.973899 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:12 crc kubenswrapper[4904]: I1121 14:32:12.987865 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhnmr"] Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.111639 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-catalog-content\") pod \"community-operators-vhnmr\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.111798 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttv9q\" (UniqueName: \"kubernetes.io/projected/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-kube-api-access-ttv9q\") pod \"community-operators-vhnmr\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.112125 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-utilities\") pod \"community-operators-vhnmr\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.215280 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttv9q\" (UniqueName: \"kubernetes.io/projected/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-kube-api-access-ttv9q\") pod \"community-operators-vhnmr\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.215491 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-utilities\") pod \"community-operators-vhnmr\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.215741 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-catalog-content\") pod \"community-operators-vhnmr\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.216406 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-utilities\") pod \"community-operators-vhnmr\" (UID: 
\"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.216439 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-catalog-content\") pod \"community-operators-vhnmr\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.239251 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttv9q\" (UniqueName: \"kubernetes.io/projected/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-kube-api-access-ttv9q\") pod \"community-operators-vhnmr\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.315979 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:13 crc kubenswrapper[4904]: I1121 14:32:13.919257 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhnmr"] Nov 21 14:32:14 crc kubenswrapper[4904]: I1121 14:32:14.806581 4904 generic.go:334] "Generic (PLEG): container finished" podID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerID="b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5" exitCode=0 Nov 21 14:32:14 crc kubenswrapper[4904]: I1121 14:32:14.806650 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhnmr" event={"ID":"79db4d68-3b44-46cb-9c7c-ed293b1a40d9","Type":"ContainerDied","Data":"b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5"} Nov 21 14:32:14 crc kubenswrapper[4904]: I1121 14:32:14.807131 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhnmr" event={"ID":"79db4d68-3b44-46cb-9c7c-ed293b1a40d9","Type":"ContainerStarted","Data":"69ed0eac00885d0d44322d0dd992c97a63d20105afce8cd8deba70a1e83b5106"} Nov 21 14:32:15 crc kubenswrapper[4904]: I1121 14:32:15.866531 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhnmr" event={"ID":"79db4d68-3b44-46cb-9c7c-ed293b1a40d9","Type":"ContainerStarted","Data":"4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63"} Nov 21 14:32:16 crc kubenswrapper[4904]: I1121 14:32:16.886785 4904 generic.go:334] "Generic (PLEG): container finished" podID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerID="4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63" exitCode=0 Nov 21 14:32:16 crc kubenswrapper[4904]: I1121 14:32:16.886885 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhnmr" event={"ID":"79db4d68-3b44-46cb-9c7c-ed293b1a40d9","Type":"ContainerDied","Data":"4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63"} Nov 21 14:32:17 crc kubenswrapper[4904]: I1121 14:32:17.906249 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhnmr" event={"ID":"79db4d68-3b44-46cb-9c7c-ed293b1a40d9","Type":"ContainerStarted","Data":"c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815"} Nov 21 14:32:17 crc kubenswrapper[4904]: I1121 14:32:17.957886 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-vhnmr" podStartSLOduration=3.507371761 podStartE2EDuration="5.957851502s" podCreationTimestamp="2025-11-21 14:32:12 +0000 UTC" firstStartedPulling="2025-11-21 14:32:14.811144213 +0000 UTC m=+3608.932676785" lastFinishedPulling="2025-11-21 14:32:17.261623944 +0000 UTC m=+3611.383156526" observedRunningTime="2025-11-21 14:32:17.934477007 +0000 UTC m=+3612.056009569" watchObservedRunningTime="2025-11-21 14:32:17.957851502 +0000 UTC m=+3612.079384064" Nov 21 14:32:22 crc kubenswrapper[4904]: I1121 14:32:22.513839 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:32:22 crc kubenswrapper[4904]: E1121 14:32:22.515976 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:32:23 crc kubenswrapper[4904]: I1121 14:32:23.316866 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:23 crc kubenswrapper[4904]: I1121 14:32:23.317328 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:23 crc kubenswrapper[4904]: I1121 14:32:23.369230 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:24 crc kubenswrapper[4904]: I1121 14:32:24.069837 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:24 crc kubenswrapper[4904]: I1121 14:32:24.133761 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhnmr"] Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.013983 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vhnmr" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerName="registry-server" containerID="cri-o://c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815" gracePeriod=2 Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.497968 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.654641 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttv9q\" (UniqueName: \"kubernetes.io/projected/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-kube-api-access-ttv9q\") pod \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.654881 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-catalog-content\") pod \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.655501 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-utilities\") pod \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\" (UID: \"79db4d68-3b44-46cb-9c7c-ed293b1a40d9\") " Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.656325 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-utilities" (OuterVolumeSpecName: "utilities") pod "79db4d68-3b44-46cb-9c7c-ed293b1a40d9" (UID: "79db4d68-3b44-46cb-9c7c-ed293b1a40d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.657287 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.661330 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-kube-api-access-ttv9q" (OuterVolumeSpecName: "kube-api-access-ttv9q") pod "79db4d68-3b44-46cb-9c7c-ed293b1a40d9" (UID: "79db4d68-3b44-46cb-9c7c-ed293b1a40d9"). InnerVolumeSpecName "kube-api-access-ttv9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.717094 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79db4d68-3b44-46cb-9c7c-ed293b1a40d9" (UID: "79db4d68-3b44-46cb-9c7c-ed293b1a40d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.758071 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttv9q\" (UniqueName: \"kubernetes.io/projected/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-kube-api-access-ttv9q\") on node \"crc\" DevicePath \"\"" Nov 21 14:32:26 crc kubenswrapper[4904]: I1121 14:32:26.758107 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79db4d68-3b44-46cb-9c7c-ed293b1a40d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.026458 4904 generic.go:334] "Generic (PLEG): container finished" podID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerID="c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815" exitCode=0 Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.026547 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhnmr" event={"ID":"79db4d68-3b44-46cb-9c7c-ed293b1a40d9","Type":"ContainerDied","Data":"c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815"} Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.026827 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhnmr" event={"ID":"79db4d68-3b44-46cb-9c7c-ed293b1a40d9","Type":"ContainerDied","Data":"69ed0eac00885d0d44322d0dd992c97a63d20105afce8cd8deba70a1e83b5106"} Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.026862 4904 scope.go:117] "RemoveContainer" containerID="c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.026560 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhnmr" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.052066 4904 scope.go:117] "RemoveContainer" containerID="4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.094234 4904 scope.go:117] "RemoveContainer" containerID="b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.095100 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhnmr"] Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.110716 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vhnmr"] Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.158984 4904 scope.go:117] "RemoveContainer" containerID="c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815" Nov 21 14:32:27 crc kubenswrapper[4904]: E1121 14:32:27.159610 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815\": container with ID starting with c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815 not found: ID does not exist" containerID="c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.159691 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815"} err="failed to get container status \"c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815\": rpc error: code = NotFound desc = could not find container \"c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815\": container with ID starting with c0c5493068ca4235d41091250bd92296a6117405f57b7f024fc2e6875785a815 not found: ID does not exist" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.159738 4904 scope.go:117] "RemoveContainer" containerID="4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63" Nov 21 14:32:27 crc kubenswrapper[4904]: E1121 14:32:27.160113 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63\": container with ID starting with 4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63 not found: ID does not exist" containerID="4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.160182 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63"} err="failed to get container status \"4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63\": rpc error: code = NotFound desc = could not find container \"4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63\": container with ID starting with 4ce824f5cd1201d04228ba5a827c26cd13430d8f664e6118313c43c546e24d63 not found: ID does not exist" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.160228 4904 scope.go:117] "RemoveContainer" containerID="b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5" Nov 21 14:32:27 crc kubenswrapper[4904]: E1121 14:32:27.160604 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5\": container with ID starting with b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5 not found: ID does not exist" containerID="b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5" Nov 21 14:32:27 crc kubenswrapper[4904]: I1121 14:32:27.160662 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5"} err="failed to get container status \"b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5\": rpc error: code = NotFound desc = could not find container \"b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5\": container with ID starting with b1bc4c1e17fdd6d4ef6c16118393b5958527651228da96e854f08fa1177af1d5 not found: ID does not exist" Nov 21 14:32:28 crc kubenswrapper[4904]: I1121 14:32:28.526795 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" path="/var/lib/kubelet/pods/79db4d68-3b44-46cb-9c7c-ed293b1a40d9/volumes" Nov 21 14:32:35 crc kubenswrapper[4904]: I1121 14:32:35.514150 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:32:35 crc kubenswrapper[4904]: E1121 14:32:35.515606 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:32:50 crc kubenswrapper[4904]: I1121 14:32:50.513331 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:32:50 crc kubenswrapper[4904]: E1121 14:32:50.514203 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:33:01 crc kubenswrapper[4904]: I1121 14:33:01.513911 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:33:01 crc kubenswrapper[4904]: E1121 14:33:01.514736 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:33:14 crc kubenswrapper[4904]: I1121 14:33:14.513193 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:33:14 crc kubenswrapper[4904]: E1121 14:33:14.514098 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:33:22 crc kubenswrapper[4904]: I1121 14:33:22.616984 4904 generic.go:334] "Generic (PLEG): container finished" podID="0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" containerID="0fbadf0d824220bf317451621495a138fa699385009c6e90e2aecb700594e81a" exitCode=0 Nov 21 14:33:22 crc kubenswrapper[4904]: I1121 14:33:22.617091 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" event={"ID":"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94","Type":"ContainerDied","Data":"0fbadf0d824220bf317451621495a138fa699385009c6e90e2aecb700594e81a"} Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.115730 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.262385 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-bootstrap-combined-ca-bundle\") pod \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.262462 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ceph\") pod \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.262636 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-inventory\") pod \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.262701 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ssh-key\") pod \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.262816 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m7tp\" (UniqueName: \"kubernetes.io/projected/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-kube-api-access-5m7tp\") pod \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\" (UID: \"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94\") " Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.271020 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-kube-api-access-5m7tp" (OuterVolumeSpecName: "kube-api-access-5m7tp") pod "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" (UID: "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94"). InnerVolumeSpecName "kube-api-access-5m7tp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.271154 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ceph" (OuterVolumeSpecName: "ceph") pod "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" (UID: "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.272047 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" (UID: "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.299338 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-inventory" (OuterVolumeSpecName: "inventory") pod "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" (UID: "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.300346 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" (UID: "0a28f51f-d088-4c6b-aec9-9fcde8dd9b94"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.366214 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m7tp\" (UniqueName: \"kubernetes.io/projected/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-kube-api-access-5m7tp\") on node \"crc\" DevicePath \"\"" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.366265 4904 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.366279 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.366292 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.366304 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a28f51f-d088-4c6b-aec9-9fcde8dd9b94-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.644108 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q" event={"ID":"0a28f51f-d088-4c6b-aec9-9fcde8dd9b94","Type":"ContainerDied","Data":"ffd701999d1dc5d1933fbbd140f563ea33b2cd5e47f4049e6e229c52f6e30c5a"} Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.644149 4904 util.go:48] "No ready sandbox for pod can be 
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.644167 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd701999d1dc5d1933fbbd140f563ea33b2cd5e47f4049e6e229c52f6e30c5a"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.755915 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"]
Nov 21 14:33:24 crc kubenswrapper[4904]: E1121 14:33:24.756477 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerName="extract-content"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.756497 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerName="extract-content"
Nov 21 14:33:24 crc kubenswrapper[4904]: E1121 14:33:24.756531 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.756539 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:33:24 crc kubenswrapper[4904]: E1121 14:33:24.756548 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerName="extract-utilities"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.756554 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerName="extract-utilities"
Nov 21 14:33:24 crc kubenswrapper[4904]: E1121 14:33:24.756577 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerName="registry-server"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.756584 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerName="registry-server"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.756815 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="79db4d68-3b44-46cb-9c7c-ed293b1a40d9" containerName="registry-server"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.756837 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a28f51f-d088-4c6b-aec9-9fcde8dd9b94" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.757753 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.762785 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.762906 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.762906 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.763060 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.779969 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"]
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.781270 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.880630 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.881059 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.881241 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpqk\" (UniqueName: \"kubernetes.io/projected/48746c32-dedf-4967-ba11-9765f1a17ec7-kube-api-access-gfpqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.881295 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.983406 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.983536 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.983689 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.983761 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfpqk\" (UniqueName: \"kubernetes.io/projected/48746c32-dedf-4967-ba11-9765f1a17ec7-kube-api-access-gfpqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.988371 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.989559 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:24 crc kubenswrapper[4904]: I1121 14:33:24.989769 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:25 crc kubenswrapper[4904]: I1121 14:33:25.003484 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpqk\" (UniqueName: \"kubernetes.io/projected/48746c32-dedf-4967-ba11-9765f1a17ec7-kube-api-access-gfpqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Nov 21 14:33:25 crc kubenswrapper[4904]: I1121 14:33:25.134168 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm" Nov 21 14:33:25 crc kubenswrapper[4904]: I1121 14:33:25.752373 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm"] Nov 21 14:33:25 crc kubenswrapper[4904]: I1121 14:33:25.755458 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:33:26 crc kubenswrapper[4904]: I1121 14:33:26.666559 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm" event={"ID":"48746c32-dedf-4967-ba11-9765f1a17ec7","Type":"ContainerStarted","Data":"6ee3a10e7322057388eaa8aa325ea578d5002f3453dfc328e4cb67b180590002"} Nov 21 14:33:26 crc kubenswrapper[4904]: I1121 14:33:26.667124 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm" event={"ID":"48746c32-dedf-4967-ba11-9765f1a17ec7","Type":"ContainerStarted","Data":"1908189ee5dcf517c0f144a2392f268d337401af314cfc8c70d38e338f2e3983"} Nov 21 14:33:26 crc kubenswrapper[4904]: I1121 14:33:26.690383 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm" podStartSLOduration=2.275116829 podStartE2EDuration="2.690362666s" podCreationTimestamp="2025-11-21 14:33:24 +0000 UTC" firstStartedPulling="2025-11-21 14:33:25.755190371 +0000 UTC m=+3679.876722923" lastFinishedPulling="2025-11-21 14:33:26.170436208 +0000 UTC m=+3680.291968760" observedRunningTime="2025-11-21 14:33:26.684210087 +0000 UTC m=+3680.805742649" watchObservedRunningTime="2025-11-21 14:33:26.690362666 +0000 UTC m=+3680.811895218" Nov 21 14:33:29 crc kubenswrapper[4904]: I1121 14:33:29.513637 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:33:29 crc kubenswrapper[4904]: E1121 14:33:29.514523 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:33:43 crc kubenswrapper[4904]: I1121 14:33:43.513952 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:33:43 crc kubenswrapper[4904]: E1121 14:33:43.514795 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:33:56 crc kubenswrapper[4904]: I1121 14:33:56.525058 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:33:56 crc kubenswrapper[4904]: E1121 14:33:56.526998 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:33:58 crc kubenswrapper[4904]: I1121 14:33:58.075583 4904 generic.go:334] "Generic (PLEG): container finished" podID="48746c32-dedf-4967-ba11-9765f1a17ec7" containerID="6ee3a10e7322057388eaa8aa325ea578d5002f3453dfc328e4cb67b180590002" exitCode=0 Nov 21 14:33:58 crc kubenswrapper[4904]: I1121 14:33:58.075684 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm" event={"ID":"48746c32-dedf-4967-ba11-9765f1a17ec7","Type":"ContainerDied","Data":"6ee3a10e7322057388eaa8aa325ea578d5002f3453dfc328e4cb67b180590002"} Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.547999 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm" Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.618111 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-inventory\") pod \"48746c32-dedf-4967-ba11-9765f1a17ec7\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.618323 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ssh-key\") pod \"48746c32-dedf-4967-ba11-9765f1a17ec7\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.618414 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfpqk\" (UniqueName: \"kubernetes.io/projected/48746c32-dedf-4967-ba11-9765f1a17ec7-kube-api-access-gfpqk\") pod \"48746c32-dedf-4967-ba11-9765f1a17ec7\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.618514 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ceph\") pod \"48746c32-dedf-4967-ba11-9765f1a17ec7\" (UID: \"48746c32-dedf-4967-ba11-9765f1a17ec7\") " Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.623696 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48746c32-dedf-4967-ba11-9765f1a17ec7-kube-api-access-gfpqk" (OuterVolumeSpecName: "kube-api-access-gfpqk") pod "48746c32-dedf-4967-ba11-9765f1a17ec7" (UID: "48746c32-dedf-4967-ba11-9765f1a17ec7"). InnerVolumeSpecName "kube-api-access-gfpqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.623712 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ceph" (OuterVolumeSpecName: "ceph") pod "48746c32-dedf-4967-ba11-9765f1a17ec7" (UID: "48746c32-dedf-4967-ba11-9765f1a17ec7"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.649088 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "48746c32-dedf-4967-ba11-9765f1a17ec7" (UID: "48746c32-dedf-4967-ba11-9765f1a17ec7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.649462 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-inventory" (OuterVolumeSpecName: "inventory") pod "48746c32-dedf-4967-ba11-9765f1a17ec7" (UID: "48746c32-dedf-4967-ba11-9765f1a17ec7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.720260 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.720623 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfpqk\" (UniqueName: \"kubernetes.io/projected/48746c32-dedf-4967-ba11-9765f1a17ec7-kube-api-access-gfpqk\") on node \"crc\" DevicePath \"\"" Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.720699 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:33:59 crc kubenswrapper[4904]: I1121 14:33:59.720761 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48746c32-dedf-4967-ba11-9765f1a17ec7-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.095968 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm" event={"ID":"48746c32-dedf-4967-ba11-9765f1a17ec7","Type":"ContainerDied","Data":"1908189ee5dcf517c0f144a2392f268d337401af314cfc8c70d38e338f2e3983"} Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.096039 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1908189ee5dcf517c0f144a2392f268d337401af314cfc8c70d38e338f2e3983" Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.096003 4904 util.go:48] "No ready sandbox for pod can be found. 
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.187142 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"]
Nov 21 14:34:00 crc kubenswrapper[4904]: E1121 14:34:00.187675 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48746c32-dedf-4967-ba11-9765f1a17ec7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.187692 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="48746c32-dedf-4967-ba11-9765f1a17ec7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.187914 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="48746c32-dedf-4967-ba11-9765f1a17ec7" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.188876 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.191039 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.192640 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.192888 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.193007 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.193396 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.212503 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"]
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.230206 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.230291 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.230365 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.230422 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8bg6\" (UniqueName: \"kubernetes.io/projected/81de75d6-c869-454d-a2a0-09557d478c99-kube-api-access-n8bg6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.332108 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.332168 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.332219 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.332263 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8bg6\" (UniqueName: \"kubernetes.io/projected/81de75d6-c869-454d-a2a0-09557d478c99-kube-api-access-n8bg6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.336484 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.336873 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.337416 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.350588 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8bg6\" (UniqueName: \"kubernetes.io/projected/81de75d6-c869-454d-a2a0-09557d478c99-kube-api-access-n8bg6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:00 crc kubenswrapper[4904]: I1121 14:34:00.509896 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:01 crc kubenswrapper[4904]: I1121 14:34:01.153689 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"]
Nov 21 14:34:02 crc kubenswrapper[4904]: I1121 14:34:02.120149 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2" event={"ID":"81de75d6-c869-454d-a2a0-09557d478c99","Type":"ContainerStarted","Data":"4be1590c1b6f9110a8414a5a275a7d4d87fada27c0b5cdd48cdceff3b119b25e"}
Nov 21 14:34:02 crc kubenswrapper[4904]: I1121 14:34:02.120508 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2" event={"ID":"81de75d6-c869-454d-a2a0-09557d478c99","Type":"ContainerStarted","Data":"11564b0a407401e4f10a6c94b90c0cdee709ee861d887d24d63a50a6452eddd2"}
Nov 21 14:34:02 crc kubenswrapper[4904]: I1121 14:34:02.142076 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2" podStartSLOduration=1.495364486 podStartE2EDuration="2.142053747s" podCreationTimestamp="2025-11-21 14:34:00 +0000 UTC" firstStartedPulling="2025-11-21 14:34:01.165073262 +0000 UTC m=+3715.286605814" lastFinishedPulling="2025-11-21 14:34:01.811762513 +0000 UTC m=+3715.933295075" observedRunningTime="2025-11-21 14:34:02.134903104 +0000 UTC m=+3716.256435676" watchObservedRunningTime="2025-11-21 14:34:02.142053747 +0000 UTC m=+3716.263586319"
Nov 21 14:34:08 crc kubenswrapper[4904]: I1121 14:34:08.513367 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19"
Nov 21 14:34:08 crc kubenswrapper[4904]: E1121 14:34:08.514511 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:34:09 crc kubenswrapper[4904]: I1121 14:34:09.199233 4904 generic.go:334] "Generic (PLEG): container finished" podID="81de75d6-c869-454d-a2a0-09557d478c99" containerID="4be1590c1b6f9110a8414a5a275a7d4d87fada27c0b5cdd48cdceff3b119b25e" exitCode=0
Nov 21 14:34:09 crc kubenswrapper[4904]: I1121 14:34:09.199288 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2" event={"ID":"81de75d6-c869-454d-a2a0-09557d478c99","Type":"ContainerDied","Data":"4be1590c1b6f9110a8414a5a275a7d4d87fada27c0b5cdd48cdceff3b119b25e"}
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.677913 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.776199 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8bg6\" (UniqueName: \"kubernetes.io/projected/81de75d6-c869-454d-a2a0-09557d478c99-kube-api-access-n8bg6\") pod \"81de75d6-c869-454d-a2a0-09557d478c99\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") "
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.777841 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-inventory\") pod \"81de75d6-c869-454d-a2a0-09557d478c99\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") "
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.777891 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key\") pod \"81de75d6-c869-454d-a2a0-09557d478c99\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") "
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.777986 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ceph\") pod \"81de75d6-c869-454d-a2a0-09557d478c99\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") "
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.783169 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ceph" (OuterVolumeSpecName: "ceph") pod "81de75d6-c869-454d-a2a0-09557d478c99" (UID: "81de75d6-c869-454d-a2a0-09557d478c99"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.788268 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81de75d6-c869-454d-a2a0-09557d478c99-kube-api-access-n8bg6" (OuterVolumeSpecName: "kube-api-access-n8bg6") pod "81de75d6-c869-454d-a2a0-09557d478c99" (UID: "81de75d6-c869-454d-a2a0-09557d478c99"). InnerVolumeSpecName "kube-api-access-n8bg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:34:10 crc kubenswrapper[4904]: E1121 14:34:10.806516 4904 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key podName:81de75d6-c869-454d-a2a0-09557d478c99 nodeName:}" failed. No retries permitted until 2025-11-21 14:34:11.30646014 +0000 UTC m=+3725.427992712 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key") pod "81de75d6-c869-454d-a2a0-09557d478c99" (UID: "81de75d6-c869-454d-a2a0-09557d478c99") : error deleting /var/lib/kubelet/pods/81de75d6-c869-454d-a2a0-09557d478c99/volume-subpaths: remove /var/lib/kubelet/pods/81de75d6-c869-454d-a2a0-09557d478c99/volume-subpaths: no such file or directory
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.810681 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-inventory" (OuterVolumeSpecName: "inventory") pod "81de75d6-c869-454d-a2a0-09557d478c99" (UID: "81de75d6-c869-454d-a2a0-09557d478c99"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.881780 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8bg6\" (UniqueName: \"kubernetes.io/projected/81de75d6-c869-454d-a2a0-09557d478c99-kube-api-access-n8bg6\") on node \"crc\" DevicePath \"\""
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.881819 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:34:10 crc kubenswrapper[4904]: I1121 14:34:10.881828 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ceph\") on node \"crc\" DevicePath \"\""
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.220290 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2" event={"ID":"81de75d6-c869-454d-a2a0-09557d478c99","Type":"ContainerDied","Data":"11564b0a407401e4f10a6c94b90c0cdee709ee861d887d24d63a50a6452eddd2"}
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.220610 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11564b0a407401e4f10a6c94b90c0cdee709ee861d887d24d63a50a6452eddd2"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.220345 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.315566 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"]
Nov 21 14:34:11 crc kubenswrapper[4904]: E1121 14:34:11.316371 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81de75d6-c869-454d-a2a0-09557d478c99" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.316455 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="81de75d6-c869-454d-a2a0-09557d478c99" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.316809 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="81de75d6-c869-454d-a2a0-09557d478c99" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.317905 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
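[Annotation] The nestedpendingoperations error above is a transient failure handled by retry gating: the subPath cleanup hit a missing directory, so the operation recorded an earliest-retry time 500ms out (durationBeforeRetry), and the ssh-key unmount succeeds on the retry a moment later (14:34:11.404435 below). A minimal sketch of per-operation retry gating under those assumptions; types and growth policy are illustrative, not kubelet's exact implementation:

    package main

    import (
        "fmt"
        "time"
    )

    // opState tracks when a failed volume operation may next be attempted.
    type opState struct {
        noRetryUntil time.Time
        backoff      time.Duration
    }

    // fail records a failure and pushes out the retry deadline; the first
    // delay matches the 500ms durationBeforeRetry seen in the log.
    func (s *opState) fail(now time.Time) {
        if s.backoff == 0 {
            s.backoff = 500 * time.Millisecond
        } else {
            s.backoff *= 2 // grows on repeated failure
        }
        s.noRetryUntil = now.Add(s.backoff)
    }

    // mayRetry reports whether the reconciler is allowed to try again.
    func (s *opState) mayRetry(now time.Time) bool {
        return !now.Before(s.noRetryUntil)
    }

    func main() {
        var s opState
        now := time.Now()
        s.fail(now)
        fmt.Println("retry allowed immediately?", s.mayRetry(now))                // false
        fmt.Println("retry allowed after 1s?", s.mayRetry(now.Add(time.Second))) // true
    }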
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.331544 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"]
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.394247 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key\") pod \"81de75d6-c869-454d-a2a0-09557d478c99\" (UID: \"81de75d6-c869-454d-a2a0-09557d478c99\") "
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.404435 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "81de75d6-c869-454d-a2a0-09557d478c99" (UID: "81de75d6-c869-454d-a2a0-09557d478c99"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.497724 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rml7z\" (UniqueName: \"kubernetes.io/projected/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-kube-api-access-rml7z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.497826 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.497941 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.498000 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.498307 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81de75d6-c869-454d-a2a0-09557d478c99-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.600264 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rml7z\" (UniqueName: \"kubernetes.io/projected/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-kube-api-access-rml7z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.600321 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.600434 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.600468 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.605036 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.605037 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.606500 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.627349 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rml7z\" (UniqueName: \"kubernetes.io/projected/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-kube-api-access-rml7z\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qzzvq\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:11 crc kubenswrapper[4904]: I1121 14:34:11.641858 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:34:12 crc kubenswrapper[4904]: I1121 14:34:12.170280 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"]
Nov 21 14:34:12 crc kubenswrapper[4904]: W1121 14:34:12.172229 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5863ea30_41cd_46f1_9c0f_95d2367aa9aa.slice/crio-8a7ddceb9dfce87468cbe9e12ae4a5e9b1df449224f3a5f0e5c6af05f076e788 WatchSource:0}: Error finding container 8a7ddceb9dfce87468cbe9e12ae4a5e9b1df449224f3a5f0e5c6af05f076e788: Status 404 returned error can't find the container with id 8a7ddceb9dfce87468cbe9e12ae4a5e9b1df449224f3a5f0e5c6af05f076e788
Nov 21 14:34:12 crc kubenswrapper[4904]: I1121 14:34:12.230161 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq" event={"ID":"5863ea30-41cd-46f1-9c0f-95d2367aa9aa","Type":"ContainerStarted","Data":"8a7ddceb9dfce87468cbe9e12ae4a5e9b1df449224f3a5f0e5c6af05f076e788"}
Nov 21 14:34:13 crc kubenswrapper[4904]: I1121 14:34:13.241083 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq" event={"ID":"5863ea30-41cd-46f1-9c0f-95d2367aa9aa","Type":"ContainerStarted","Data":"c211a76db156b0b51f8b42ee671d77dbbe352183d1ab14e423139bef96e58349"}
Nov 21 14:34:13 crc kubenswrapper[4904]: I1121 14:34:13.261744 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq" podStartSLOduration=1.792206948 podStartE2EDuration="2.261718427s" podCreationTimestamp="2025-11-21 14:34:11 +0000 UTC" firstStartedPulling="2025-11-21 14:34:12.174644251 +0000 UTC m=+3726.296176803" lastFinishedPulling="2025-11-21 14:34:12.64415573 +0000 UTC m=+3726.765688282" observedRunningTime="2025-11-21 14:34:13.260353914 +0000 UTC m=+3727.381886466" watchObservedRunningTime="2025-11-21 14:34:13.261718427 +0000 UTC m=+3727.383250989"
Nov 21 14:34:20 crc kubenswrapper[4904]: I1121 14:34:20.514497 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19"
Nov 21 14:34:20 crc kubenswrapper[4904]: E1121 14:34:20.515458 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:34:35 crc kubenswrapper[4904]: I1121 14:34:35.514398 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19"
Nov 21 14:34:35 crc kubenswrapper[4904]: E1121 14:34:35.516352 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:34:46 crc kubenswrapper[4904]: I1121 14:34:46.513223 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19"
Nov 21 14:34:46 crc kubenswrapper[4904]: E1121 14:34:46.514100 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:35:00 crc kubenswrapper[4904]: I1121 14:35:00.513407 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19"
Nov 21 14:35:00 crc kubenswrapper[4904]: E1121 14:35:00.514439 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:35:04 crc kubenswrapper[4904]: I1121 14:35:04.825048 4904 generic.go:334] "Generic (PLEG): container finished" podID="5863ea30-41cd-46f1-9c0f-95d2367aa9aa" containerID="c211a76db156b0b51f8b42ee671d77dbbe352183d1ab14e423139bef96e58349" exitCode=0
Nov 21 14:35:04 crc kubenswrapper[4904]: I1121 14:35:04.825279 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq" event={"ID":"5863ea30-41cd-46f1-9c0f-95d2367aa9aa","Type":"ContainerDied","Data":"c211a76db156b0b51f8b42ee671d77dbbe352183d1ab14e423139bef96e58349"}
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.315781 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
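[Annotation] The single W1121 manager.go:1169 warning above looks like the usual cgroup-watch race: the crio-8a7ddceb... cgroup appears under kubepods-besteffort before the runtime can answer a lookup for that container ID, so the one-shot watch handler gets a 404. It is benign here; the same container is reported via "ContainerStarted" at 14:34:12.230161 on the next PLEG pass, so no state is lost.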
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.390133 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ssh-key\") pod \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") "
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.390559 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rml7z\" (UniqueName: \"kubernetes.io/projected/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-kube-api-access-rml7z\") pod \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") "
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.390870 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-inventory\") pod \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") "
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.391181 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ceph\") pod \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\" (UID: \"5863ea30-41cd-46f1-9c0f-95d2367aa9aa\") "
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.398391 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-kube-api-access-rml7z" (OuterVolumeSpecName: "kube-api-access-rml7z") pod "5863ea30-41cd-46f1-9c0f-95d2367aa9aa" (UID: "5863ea30-41cd-46f1-9c0f-95d2367aa9aa"). InnerVolumeSpecName "kube-api-access-rml7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.399239 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ceph" (OuterVolumeSpecName: "ceph") pod "5863ea30-41cd-46f1-9c0f-95d2367aa9aa" (UID: "5863ea30-41cd-46f1-9c0f-95d2367aa9aa"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.422326 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5863ea30-41cd-46f1-9c0f-95d2367aa9aa" (UID: "5863ea30-41cd-46f1-9c0f-95d2367aa9aa"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.424511 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-inventory" (OuterVolumeSpecName: "inventory") pod "5863ea30-41cd-46f1-9c0f-95d2367aa9aa" (UID: "5863ea30-41cd-46f1-9c0f-95d2367aa9aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.498697 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.498844 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rml7z\" (UniqueName: \"kubernetes.io/projected/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-kube-api-access-rml7z\") on node \"crc\" DevicePath \"\""
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.498982 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.499064 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5863ea30-41cd-46f1-9c0f-95d2367aa9aa-ceph\") on node \"crc\" DevicePath \"\""
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.856269 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq" event={"ID":"5863ea30-41cd-46f1-9c0f-95d2367aa9aa","Type":"ContainerDied","Data":"8a7ddceb9dfce87468cbe9e12ae4a5e9b1df449224f3a5f0e5c6af05f076e788"}
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.856582 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a7ddceb9dfce87468cbe9e12ae4a5e9b1df449224f3a5f0e5c6af05f076e788"
Nov 21 14:35:06 crc kubenswrapper[4904]: I1121 14:35:06.856419 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qzzvq"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.000788 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"]
Nov 21 14:35:07 crc kubenswrapper[4904]: E1121 14:35:07.001403 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5863ea30-41cd-46f1-9c0f-95d2367aa9aa" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.001434 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5863ea30-41cd-46f1-9c0f-95d2367aa9aa" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.001714 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5863ea30-41cd-46f1-9c0f-95d2367aa9aa" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.002673 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.005048 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.005088 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.005284 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.005857 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.008085 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.018513 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"]
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.108748 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7ntf\" (UniqueName: \"kubernetes.io/projected/9eb481be-e58b-4b7a-be65-188c8c4c6d70-kube-api-access-f7ntf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.108815 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.108853 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.108901 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.211302 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"
Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.211379 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName:
\"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.211443 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.211554 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7ntf\" (UniqueName: \"kubernetes.io/projected/9eb481be-e58b-4b7a-be65-188c8c4c6d70-kube-api-access-f7ntf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.215935 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.221636 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.226179 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.226908 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7ntf\" (UniqueName: \"kubernetes.io/projected/9eb481be-e58b-4b7a-be65-188c8c4c6d70-kube-api-access-f7ntf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.328739 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:07 crc kubenswrapper[4904]: I1121 14:35:07.887687 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn"] Nov 21 14:35:08 crc kubenswrapper[4904]: I1121 14:35:08.888179 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" event={"ID":"9eb481be-e58b-4b7a-be65-188c8c4c6d70","Type":"ContainerStarted","Data":"d4127d780f95d15ede89cd04140bf32960093feac58aba1f2389bdae30d661dd"} Nov 21 14:35:08 crc kubenswrapper[4904]: I1121 14:35:08.888505 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" event={"ID":"9eb481be-e58b-4b7a-be65-188c8c4c6d70","Type":"ContainerStarted","Data":"7e01f0cfa39fbb29fb6066ee91a0912d27f792c109e76eff4b8bbf2fac04b9b4"} Nov 21 14:35:08 crc kubenswrapper[4904]: I1121 14:35:08.910389 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" podStartSLOduration=2.463542822 podStartE2EDuration="2.910372392s" podCreationTimestamp="2025-11-21 14:35:06 +0000 UTC" firstStartedPulling="2025-11-21 14:35:07.900735238 +0000 UTC m=+3782.022267790" lastFinishedPulling="2025-11-21 14:35:08.347564788 +0000 UTC m=+3782.469097360" observedRunningTime="2025-11-21 14:35:08.90533157 +0000 UTC m=+3783.026864122" watchObservedRunningTime="2025-11-21 14:35:08.910372392 +0000 UTC m=+3783.031904944" Nov 21 14:35:14 crc kubenswrapper[4904]: I1121 14:35:14.966572 4904 generic.go:334] "Generic (PLEG): container finished" podID="9eb481be-e58b-4b7a-be65-188c8c4c6d70" containerID="d4127d780f95d15ede89cd04140bf32960093feac58aba1f2389bdae30d661dd" exitCode=0 Nov 21 14:35:14 crc kubenswrapper[4904]: I1121 14:35:14.966704 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" event={"ID":"9eb481be-e58b-4b7a-be65-188c8c4c6d70","Type":"ContainerDied","Data":"d4127d780f95d15ede89cd04140bf32960093feac58aba1f2389bdae30d661dd"} Nov 21 14:35:15 crc kubenswrapper[4904]: I1121 14:35:15.514125 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:35:15 crc kubenswrapper[4904]: E1121 14:35:15.514833 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:35:16 crc kubenswrapper[4904]: I1121 14:35:16.447185 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-twpnr" podUID="62b52056-10f9-4150-ae0f-b443b165074d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.191159 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.273141 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7ntf\" (UniqueName: \"kubernetes.io/projected/9eb481be-e58b-4b7a-be65-188c8c4c6d70-kube-api-access-f7ntf\") pod \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.273192 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-inventory\") pod \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.273420 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ssh-key\") pod \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.273513 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ceph\") pod \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\" (UID: \"9eb481be-e58b-4b7a-be65-188c8c4c6d70\") " Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.298935 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb481be-e58b-4b7a-be65-188c8c4c6d70-kube-api-access-f7ntf" (OuterVolumeSpecName: "kube-api-access-f7ntf") pod "9eb481be-e58b-4b7a-be65-188c8c4c6d70" (UID: "9eb481be-e58b-4b7a-be65-188c8c4c6d70"). InnerVolumeSpecName "kube-api-access-f7ntf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.298978 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ceph" (OuterVolumeSpecName: "ceph") pod "9eb481be-e58b-4b7a-be65-188c8c4c6d70" (UID: "9eb481be-e58b-4b7a-be65-188c8c4c6d70"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.310642 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-inventory" (OuterVolumeSpecName: "inventory") pod "9eb481be-e58b-4b7a-be65-188c8c4c6d70" (UID: "9eb481be-e58b-4b7a-be65-188c8c4c6d70"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.319325 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9eb481be-e58b-4b7a-be65-188c8c4c6d70" (UID: "9eb481be-e58b-4b7a-be65-188c8c4c6d70"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.377571 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.377609 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.377622 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7ntf\" (UniqueName: \"kubernetes.io/projected/9eb481be-e58b-4b7a-be65-188c8c4c6d70-kube-api-access-f7ntf\") on node \"crc\" DevicePath \"\"" Nov 21 14:35:17 crc kubenswrapper[4904]: I1121 14:35:17.377632 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9eb481be-e58b-4b7a-be65-188c8c4c6d70-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.001221 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" event={"ID":"9eb481be-e58b-4b7a-be65-188c8c4c6d70","Type":"ContainerDied","Data":"7e01f0cfa39fbb29fb6066ee91a0912d27f792c109e76eff4b8bbf2fac04b9b4"} Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.002554 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e01f0cfa39fbb29fb6066ee91a0912d27f792c109e76eff4b8bbf2fac04b9b4" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.001284 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.276690 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h"] Nov 21 14:35:18 crc kubenswrapper[4904]: E1121 14:35:18.277123 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb481be-e58b-4b7a-be65-188c8c4c6d70" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.277144 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb481be-e58b-4b7a-be65-188c8c4c6d70" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.277368 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb481be-e58b-4b7a-be65-188c8c4c6d70" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.278453 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.280939 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.280954 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.280986 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.281017 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.290225 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h"] Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.291332 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.399786 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.399853 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnd9\" (UniqueName: \"kubernetes.io/projected/f292a2fb-beff-4c38-891a-db1e34c7157d-kube-api-access-crnd9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.399915 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.399941 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.502346 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.502454 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-crnd9\" (UniqueName: \"kubernetes.io/projected/f292a2fb-beff-4c38-891a-db1e34c7157d-kube-api-access-crnd9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.502549 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.502586 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.508195 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.514313 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.518278 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.526855 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crnd9\" (UniqueName: \"kubernetes.io/projected/f292a2fb-beff-4c38-891a-db1e34c7157d-kube-api-access-crnd9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-svg6h\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:18 crc kubenswrapper[4904]: I1121 14:35:18.600308 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:35:19 crc kubenswrapper[4904]: I1121 14:35:19.174544 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h"] Nov 21 14:35:20 crc kubenswrapper[4904]: I1121 14:35:20.027312 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" event={"ID":"f292a2fb-beff-4c38-891a-db1e34c7157d","Type":"ContainerStarted","Data":"cd96d4e9d994b6d7945aa17b04c28b0a5e7a041469afb8e88a37704f2a4ac9ef"} Nov 21 14:35:20 crc kubenswrapper[4904]: I1121 14:35:20.028120 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" event={"ID":"f292a2fb-beff-4c38-891a-db1e34c7157d","Type":"ContainerStarted","Data":"4aa9d9ed83e16203ec3cf73e093cb6a0b6d29c3ea4694ac8f610778a15d273ab"} Nov 21 14:35:20 crc kubenswrapper[4904]: I1121 14:35:20.052508 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" podStartSLOduration=1.615908363 podStartE2EDuration="2.052479415s" podCreationTimestamp="2025-11-21 14:35:18 +0000 UTC" firstStartedPulling="2025-11-21 14:35:19.183460429 +0000 UTC m=+3793.304992981" lastFinishedPulling="2025-11-21 14:35:19.620031471 +0000 UTC m=+3793.741564033" observedRunningTime="2025-11-21 14:35:20.049475401 +0000 UTC m=+3794.171007983" watchObservedRunningTime="2025-11-21 14:35:20.052479415 +0000 UTC m=+3794.174011987" Nov 21 14:35:29 crc kubenswrapper[4904]: I1121 14:35:29.514216 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:35:29 crc kubenswrapper[4904]: E1121 14:35:29.515275 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.046056 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7zzdg"] Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.049301 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.057306 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zzdg"] Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.140994 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-utilities\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.141091 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb2sf\" (UniqueName: \"kubernetes.io/projected/a6b60e67-15a3-409d-86ae-b5013c8c1056-kube-api-access-mb2sf\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.141122 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-catalog-content\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.243465 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-utilities\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.243551 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb2sf\" (UniqueName: \"kubernetes.io/projected/a6b60e67-15a3-409d-86ae-b5013c8c1056-kube-api-access-mb2sf\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.243578 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-catalog-content\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.244239 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-utilities\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.244267 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-catalog-content\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.279244 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mb2sf\" (UniqueName: \"kubernetes.io/projected/a6b60e67-15a3-409d-86ae-b5013c8c1056-kube-api-access-mb2sf\") pod \"redhat-marketplace-7zzdg\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.397367 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:35 crc kubenswrapper[4904]: I1121 14:35:35.903421 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zzdg"] Nov 21 14:35:36 crc kubenswrapper[4904]: W1121 14:35:36.053795 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6b60e67_15a3_409d_86ae_b5013c8c1056.slice/crio-f60aa74bba36e29a7297e266a471531745210121862145fe50e393e3d3fd754b WatchSource:0}: Error finding container f60aa74bba36e29a7297e266a471531745210121862145fe50e393e3d3fd754b: Status 404 returned error can't find the container with id f60aa74bba36e29a7297e266a471531745210121862145fe50e393e3d3fd754b Nov 21 14:35:36 crc kubenswrapper[4904]: I1121 14:35:36.187385 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zzdg" event={"ID":"a6b60e67-15a3-409d-86ae-b5013c8c1056","Type":"ContainerStarted","Data":"f60aa74bba36e29a7297e266a471531745210121862145fe50e393e3d3fd754b"} Nov 21 14:35:37 crc kubenswrapper[4904]: I1121 14:35:37.200425 4904 generic.go:334] "Generic (PLEG): container finished" podID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerID="4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2" exitCode=0 Nov 21 14:35:37 crc kubenswrapper[4904]: I1121 14:35:37.200498 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zzdg" event={"ID":"a6b60e67-15a3-409d-86ae-b5013c8c1056","Type":"ContainerDied","Data":"4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2"} Nov 21 14:35:38 crc kubenswrapper[4904]: I1121 14:35:38.212908 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zzdg" event={"ID":"a6b60e67-15a3-409d-86ae-b5013c8c1056","Type":"ContainerStarted","Data":"81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742"} Nov 21 14:35:39 crc kubenswrapper[4904]: I1121 14:35:39.226506 4904 generic.go:334] "Generic (PLEG): container finished" podID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerID="81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742" exitCode=0 Nov 21 14:35:39 crc kubenswrapper[4904]: I1121 14:35:39.226571 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zzdg" event={"ID":"a6b60e67-15a3-409d-86ae-b5013c8c1056","Type":"ContainerDied","Data":"81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742"} Nov 21 14:35:40 crc kubenswrapper[4904]: I1121 14:35:40.239548 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zzdg" event={"ID":"a6b60e67-15a3-409d-86ae-b5013c8c1056","Type":"ContainerStarted","Data":"4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f"} Nov 21 14:35:40 crc kubenswrapper[4904]: I1121 14:35:40.261711 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7zzdg" podStartSLOduration=2.849185489 
podStartE2EDuration="5.261695283s" podCreationTimestamp="2025-11-21 14:35:35 +0000 UTC" firstStartedPulling="2025-11-21 14:35:37.206424712 +0000 UTC m=+3811.327957274" lastFinishedPulling="2025-11-21 14:35:39.618934506 +0000 UTC m=+3813.740467068" observedRunningTime="2025-11-21 14:35:40.258508436 +0000 UTC m=+3814.380040998" watchObservedRunningTime="2025-11-21 14:35:40.261695283 +0000 UTC m=+3814.383227835" Nov 21 14:35:41 crc kubenswrapper[4904]: I1121 14:35:41.513773 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:35:41 crc kubenswrapper[4904]: E1121 14:35:41.514290 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:35:45 crc kubenswrapper[4904]: I1121 14:35:45.399411 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:45 crc kubenswrapper[4904]: I1121 14:35:45.400502 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:46 crc kubenswrapper[4904]: I1121 14:35:46.106886 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:46 crc kubenswrapper[4904]: I1121 14:35:46.387037 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:46 crc kubenswrapper[4904]: I1121 14:35:46.449254 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zzdg"] Nov 21 14:35:48 crc kubenswrapper[4904]: I1121 14:35:48.325164 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7zzdg" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerName="registry-server" containerID="cri-o://4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f" gracePeriod=2 Nov 21 14:35:48 crc kubenswrapper[4904]: I1121 14:35:48.881167 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.077556 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb2sf\" (UniqueName: \"kubernetes.io/projected/a6b60e67-15a3-409d-86ae-b5013c8c1056-kube-api-access-mb2sf\") pod \"a6b60e67-15a3-409d-86ae-b5013c8c1056\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.077628 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-catalog-content\") pod \"a6b60e67-15a3-409d-86ae-b5013c8c1056\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.077706 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-utilities\") pod \"a6b60e67-15a3-409d-86ae-b5013c8c1056\" (UID: \"a6b60e67-15a3-409d-86ae-b5013c8c1056\") " Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.079319 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-utilities" (OuterVolumeSpecName: "utilities") pod "a6b60e67-15a3-409d-86ae-b5013c8c1056" (UID: "a6b60e67-15a3-409d-86ae-b5013c8c1056"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.081032 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.087043 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b60e67-15a3-409d-86ae-b5013c8c1056-kube-api-access-mb2sf" (OuterVolumeSpecName: "kube-api-access-mb2sf") pod "a6b60e67-15a3-409d-86ae-b5013c8c1056" (UID: "a6b60e67-15a3-409d-86ae-b5013c8c1056"). InnerVolumeSpecName "kube-api-access-mb2sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.100161 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6b60e67-15a3-409d-86ae-b5013c8c1056" (UID: "a6b60e67-15a3-409d-86ae-b5013c8c1056"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.183404 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb2sf\" (UniqueName: \"kubernetes.io/projected/a6b60e67-15a3-409d-86ae-b5013c8c1056-kube-api-access-mb2sf\") on node \"crc\" DevicePath \"\"" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.183446 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6b60e67-15a3-409d-86ae-b5013c8c1056-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.338238 4904 generic.go:334] "Generic (PLEG): container finished" podID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerID="4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f" exitCode=0 Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.338309 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zzdg" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.338342 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zzdg" event={"ID":"a6b60e67-15a3-409d-86ae-b5013c8c1056","Type":"ContainerDied","Data":"4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f"} Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.339009 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zzdg" event={"ID":"a6b60e67-15a3-409d-86ae-b5013c8c1056","Type":"ContainerDied","Data":"f60aa74bba36e29a7297e266a471531745210121862145fe50e393e3d3fd754b"} Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.339051 4904 scope.go:117] "RemoveContainer" containerID="4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.364898 4904 scope.go:117] "RemoveContainer" containerID="81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.381200 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zzdg"] Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.389955 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zzdg"] Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.410130 4904 scope.go:117] "RemoveContainer" containerID="4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.475932 4904 scope.go:117] "RemoveContainer" containerID="4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f" Nov 21 14:35:49 crc kubenswrapper[4904]: E1121 14:35:49.476370 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f\": container with ID starting with 4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f not found: ID does not exist" containerID="4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.476406 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f"} err="failed to get container status 
\"4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f\": rpc error: code = NotFound desc = could not find container \"4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f\": container with ID starting with 4d61eadd48ec773c39b81633c307a205d71f3f8bcde390299a693fba472b814f not found: ID does not exist" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.476435 4904 scope.go:117] "RemoveContainer" containerID="81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742" Nov 21 14:35:49 crc kubenswrapper[4904]: E1121 14:35:49.476825 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742\": container with ID starting with 81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742 not found: ID does not exist" containerID="81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.476857 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742"} err="failed to get container status \"81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742\": rpc error: code = NotFound desc = could not find container \"81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742\": container with ID starting with 81b64aa52a34d46b6a871d1aef4a08fe0f6626959b2fda7656245150550c6742 not found: ID does not exist" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.476878 4904 scope.go:117] "RemoveContainer" containerID="4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2" Nov 21 14:35:49 crc kubenswrapper[4904]: E1121 14:35:49.477227 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2\": container with ID starting with 4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2 not found: ID does not exist" containerID="4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2" Nov 21 14:35:49 crc kubenswrapper[4904]: I1121 14:35:49.477258 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2"} err="failed to get container status \"4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2\": rpc error: code = NotFound desc = could not find container \"4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2\": container with ID starting with 4cc16778abf65ea6d490d7e339bb78ace377eb50f4471c48af079513fff596d2 not found: ID does not exist" Nov 21 14:35:50 crc kubenswrapper[4904]: I1121 14:35:50.527061 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" path="/var/lib/kubelet/pods/a6b60e67-15a3-409d-86ae-b5013c8c1056/volumes" Nov 21 14:35:52 crc kubenswrapper[4904]: I1121 14:35:52.515144 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:35:52 crc kubenswrapper[4904]: E1121 14:35:52.517332 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:36:06 crc kubenswrapper[4904]: I1121 14:36:06.528643 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:36:06 crc kubenswrapper[4904]: E1121 14:36:06.529809 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:36:19 crc kubenswrapper[4904]: I1121 14:36:19.727518 4904 generic.go:334] "Generic (PLEG): container finished" podID="f292a2fb-beff-4c38-891a-db1e34c7157d" containerID="cd96d4e9d994b6d7945aa17b04c28b0a5e7a041469afb8e88a37704f2a4ac9ef" exitCode=0 Nov 21 14:36:19 crc kubenswrapper[4904]: I1121 14:36:19.727787 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" event={"ID":"f292a2fb-beff-4c38-891a-db1e34c7157d","Type":"ContainerDied","Data":"cd96d4e9d994b6d7945aa17b04c28b0a5e7a041469afb8e88a37704f2a4ac9ef"} Nov 21 14:36:20 crc kubenswrapper[4904]: I1121 14:36:20.515390 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:36:20 crc kubenswrapper[4904]: E1121 14:36:20.516475 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:36:21 crc kubenswrapper[4904]: I1121 14:36:21.897715 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.051935 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ceph\") pod \"f292a2fb-beff-4c38-891a-db1e34c7157d\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.052335 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-inventory\") pod \"f292a2fb-beff-4c38-891a-db1e34c7157d\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.052494 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crnd9\" (UniqueName: \"kubernetes.io/projected/f292a2fb-beff-4c38-891a-db1e34c7157d-kube-api-access-crnd9\") pod \"f292a2fb-beff-4c38-891a-db1e34c7157d\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.052526 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ssh-key\") pod \"f292a2fb-beff-4c38-891a-db1e34c7157d\" (UID: \"f292a2fb-beff-4c38-891a-db1e34c7157d\") " Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.066838 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ceph" (OuterVolumeSpecName: "ceph") pod "f292a2fb-beff-4c38-891a-db1e34c7157d" (UID: "f292a2fb-beff-4c38-891a-db1e34c7157d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.066954 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f292a2fb-beff-4c38-891a-db1e34c7157d-kube-api-access-crnd9" (OuterVolumeSpecName: "kube-api-access-crnd9") pod "f292a2fb-beff-4c38-891a-db1e34c7157d" (UID: "f292a2fb-beff-4c38-891a-db1e34c7157d"). InnerVolumeSpecName "kube-api-access-crnd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.069718 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lpsn9"] Nov 21 14:36:22 crc kubenswrapper[4904]: E1121 14:36:22.070258 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerName="registry-server" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.070278 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerName="registry-server" Nov 21 14:36:22 crc kubenswrapper[4904]: E1121 14:36:22.070305 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerName="extract-content" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.070311 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerName="extract-content" Nov 21 14:36:22 crc kubenswrapper[4904]: E1121 14:36:22.070326 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f292a2fb-beff-4c38-891a-db1e34c7157d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.070333 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f292a2fb-beff-4c38-891a-db1e34c7157d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:36:22 crc kubenswrapper[4904]: E1121 14:36:22.070358 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerName="extract-utilities" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.070365 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerName="extract-utilities" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.070604 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f292a2fb-beff-4c38-891a-db1e34c7157d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.070644 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6b60e67-15a3-409d-86ae-b5013c8c1056" containerName="registry-server" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.074110 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.112882 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f292a2fb-beff-4c38-891a-db1e34c7157d" (UID: "f292a2fb-beff-4c38-891a-db1e34c7157d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.113683 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpsn9"] Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.136715 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-inventory" (OuterVolumeSpecName: "inventory") pod "f292a2fb-beff-4c38-891a-db1e34c7157d" (UID: "f292a2fb-beff-4c38-891a-db1e34c7157d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.158754 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crnd9\" (UniqueName: \"kubernetes.io/projected/f292a2fb-beff-4c38-891a-db1e34c7157d-kube-api-access-crnd9\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.158974 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.159084 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.159147 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f292a2fb-beff-4c38-891a-db1e34c7157d-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.261379 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfd6\" (UniqueName: \"kubernetes.io/projected/7bd4a010-ffa6-418a-814b-de555a845af4-kube-api-access-5mfd6\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.261447 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-utilities\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.261530 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-catalog-content\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.364332 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-utilities\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.364790 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-catalog-content\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.365005 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-utilities\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.365180 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5mfd6\" (UniqueName: \"kubernetes.io/projected/7bd4a010-ffa6-418a-814b-de555a845af4-kube-api-access-5mfd6\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.365378 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-catalog-content\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.389789 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mfd6\" (UniqueName: \"kubernetes.io/projected/7bd4a010-ffa6-418a-814b-de555a845af4-kube-api-access-5mfd6\") pod \"redhat-operators-lpsn9\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.534841 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.771234 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" event={"ID":"f292a2fb-beff-4c38-891a-db1e34c7157d","Type":"ContainerDied","Data":"4aa9d9ed83e16203ec3cf73e093cb6a0b6d29c3ea4694ac8f610778a15d273ab"} Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.771807 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa9d9ed83e16203ec3cf73e093cb6a0b6d29c3ea4694ac8f610778a15d273ab" Nov 21 14:36:22 crc kubenswrapper[4904]: I1121 14:36:22.771309 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-svg6h" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.019645 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kzpz9"] Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.021712 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.024900 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.025047 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.025114 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.025200 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.025384 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.036169 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kzpz9"] Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.086823 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpsn9"] Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.183881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68mqt\" (UniqueName: \"kubernetes.io/projected/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-kube-api-access-68mqt\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.184562 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.184639 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.184734 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ceph\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.287855 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68mqt\" (UniqueName: \"kubernetes.io/projected/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-kube-api-access-68mqt\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.287946 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.288011 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.288278 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ceph\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.295339 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ceph\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.297968 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.299170 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.306306 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68mqt\" (UniqueName: \"kubernetes.io/projected/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-kube-api-access-68mqt\") pod \"ssh-known-hosts-edpm-deployment-kzpz9\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.342788 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.786536 4904 generic.go:334] "Generic (PLEG): container finished" podID="7bd4a010-ffa6-418a-814b-de555a845af4" containerID="9afba735ac6d244260e59ecb2abd04d8ebb9f7df3b3854272f45f6e93b2d4a90" exitCode=0 Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.786646 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpsn9" event={"ID":"7bd4a010-ffa6-418a-814b-de555a845af4","Type":"ContainerDied","Data":"9afba735ac6d244260e59ecb2abd04d8ebb9f7df3b3854272f45f6e93b2d4a90"} Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.787073 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpsn9" event={"ID":"7bd4a010-ffa6-418a-814b-de555a845af4","Type":"ContainerStarted","Data":"3ff798394211358f49fd27d4a8017c718514586de0fd4dcda0efca4f92d91fa9"} Nov 21 14:36:23 crc kubenswrapper[4904]: I1121 14:36:23.978006 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-kzpz9"] Nov 21 14:36:24 crc kubenswrapper[4904]: I1121 14:36:24.800963 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" event={"ID":"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5","Type":"ContainerStarted","Data":"fc03cbd2bbad783ed64b15a99a3d6ff4cc94580b55d8ee72f9936439d7c2f6b0"} Nov 21 14:36:25 crc kubenswrapper[4904]: I1121 14:36:25.815648 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" event={"ID":"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5","Type":"ContainerStarted","Data":"5c09ad4fa9b9dbb1b79fc14418fd6a32c6308d1ef44ea7e9b568a93003e7a3a5"} Nov 21 14:36:25 crc kubenswrapper[4904]: I1121 14:36:25.819111 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpsn9" event={"ID":"7bd4a010-ffa6-418a-814b-de555a845af4","Type":"ContainerStarted","Data":"3526a60263e560d57f1c3ace0df7d590a2d4ce99eeb49a37b1a25bc0f357c4b7"} Nov 21 14:36:25 crc kubenswrapper[4904]: I1121 14:36:25.842099 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" podStartSLOduration=3.281717066 podStartE2EDuration="3.842076441s" podCreationTimestamp="2025-11-21 14:36:22 +0000 UTC" firstStartedPulling="2025-11-21 14:36:24.167020633 +0000 UTC m=+3858.288553195" lastFinishedPulling="2025-11-21 14:36:24.727379998 +0000 UTC m=+3858.848912570" observedRunningTime="2025-11-21 14:36:25.835091523 +0000 UTC m=+3859.956624085" watchObservedRunningTime="2025-11-21 14:36:25.842076441 +0000 UTC m=+3859.963608993" Nov 21 14:36:30 crc kubenswrapper[4904]: I1121 14:36:30.875871 4904 generic.go:334] "Generic (PLEG): container finished" podID="7bd4a010-ffa6-418a-814b-de555a845af4" containerID="3526a60263e560d57f1c3ace0df7d590a2d4ce99eeb49a37b1a25bc0f357c4b7" exitCode=0 Nov 21 14:36:30 crc kubenswrapper[4904]: I1121 14:36:30.875949 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpsn9" event={"ID":"7bd4a010-ffa6-418a-814b-de555a845af4","Type":"ContainerDied","Data":"3526a60263e560d57f1c3ace0df7d590a2d4ce99eeb49a37b1a25bc0f357c4b7"} Nov 21 14:36:31 crc kubenswrapper[4904]: I1121 14:36:31.892342 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpsn9" 
event={"ID":"7bd4a010-ffa6-418a-814b-de555a845af4","Type":"ContainerStarted","Data":"91c637e3a48f6a3fb2164406f1b52ee4c765e1475746408b55ac1ddc57500e8a"} Nov 21 14:36:31 crc kubenswrapper[4904]: I1121 14:36:31.917015 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lpsn9" podStartSLOduration=2.324809796 podStartE2EDuration="9.916992221s" podCreationTimestamp="2025-11-21 14:36:22 +0000 UTC" firstStartedPulling="2025-11-21 14:36:23.790063431 +0000 UTC m=+3857.911595983" lastFinishedPulling="2025-11-21 14:36:31.382245856 +0000 UTC m=+3865.503778408" observedRunningTime="2025-11-21 14:36:31.915332421 +0000 UTC m=+3866.036864983" watchObservedRunningTime="2025-11-21 14:36:31.916992221 +0000 UTC m=+3866.038524773" Nov 21 14:36:32 crc kubenswrapper[4904]: I1121 14:36:32.535806 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:32 crc kubenswrapper[4904]: I1121 14:36:32.535901 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:36:33 crc kubenswrapper[4904]: I1121 14:36:33.514132 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:36:33 crc kubenswrapper[4904]: E1121 14:36:33.514815 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:36:33 crc kubenswrapper[4904]: I1121 14:36:33.589846 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpsn9" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="registry-server" probeResult="failure" output=< Nov 21 14:36:33 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:36:33 crc kubenswrapper[4904]: > Nov 21 14:36:40 crc kubenswrapper[4904]: I1121 14:36:40.996776 4904 generic.go:334] "Generic (PLEG): container finished" podID="52c0e44a-c467-4a11-a4f9-21f59b8cd3c5" containerID="5c09ad4fa9b9dbb1b79fc14418fd6a32c6308d1ef44ea7e9b568a93003e7a3a5" exitCode=0 Nov 21 14:36:40 crc kubenswrapper[4904]: I1121 14:36:40.996935 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" event={"ID":"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5","Type":"ContainerDied","Data":"5c09ad4fa9b9dbb1b79fc14418fd6a32c6308d1ef44ea7e9b568a93003e7a3a5"} Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.502700 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.615612 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68mqt\" (UniqueName: \"kubernetes.io/projected/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-kube-api-access-68mqt\") pod \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.615968 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ceph\") pod \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.615995 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-inventory-0\") pod \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.616022 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ssh-key-openstack-edpm-ipam\") pod \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\" (UID: \"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5\") " Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.622664 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ceph" (OuterVolumeSpecName: "ceph") pod "52c0e44a-c467-4a11-a4f9-21f59b8cd3c5" (UID: "52c0e44a-c467-4a11-a4f9-21f59b8cd3c5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.637150 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-kube-api-access-68mqt" (OuterVolumeSpecName: "kube-api-access-68mqt") pod "52c0e44a-c467-4a11-a4f9-21f59b8cd3c5" (UID: "52c0e44a-c467-4a11-a4f9-21f59b8cd3c5"). InnerVolumeSpecName "kube-api-access-68mqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.652947 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "52c0e44a-c467-4a11-a4f9-21f59b8cd3c5" (UID: "52c0e44a-c467-4a11-a4f9-21f59b8cd3c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.654603 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "52c0e44a-c467-4a11-a4f9-21f59b8cd3c5" (UID: "52c0e44a-c467-4a11-a4f9-21f59b8cd3c5"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.720086 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68mqt\" (UniqueName: \"kubernetes.io/projected/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-kube-api-access-68mqt\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.720137 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.720154 4904 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:42 crc kubenswrapper[4904]: I1121 14:36:42.720168 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/52c0e44a-c467-4a11-a4f9-21f59b8cd3c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.021727 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" event={"ID":"52c0e44a-c467-4a11-a4f9-21f59b8cd3c5","Type":"ContainerDied","Data":"fc03cbd2bbad783ed64b15a99a3d6ff4cc94580b55d8ee72f9936439d7c2f6b0"} Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.021791 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc03cbd2bbad783ed64b15a99a3d6ff4cc94580b55d8ee72f9936439d7c2f6b0" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.021859 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-kzpz9" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.098063 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf"] Nov 21 14:36:43 crc kubenswrapper[4904]: E1121 14:36:43.098625 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52c0e44a-c467-4a11-a4f9-21f59b8cd3c5" containerName="ssh-known-hosts-edpm-deployment" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.098644 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="52c0e44a-c467-4a11-a4f9-21f59b8cd3c5" containerName="ssh-known-hosts-edpm-deployment" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.098891 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="52c0e44a-c467-4a11-a4f9-21f59b8cd3c5" containerName="ssh-known-hosts-edpm-deployment" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.099771 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.102347 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.102438 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.102785 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.103359 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.104603 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.110315 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf"] Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.230950 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkzj\" (UniqueName: \"kubernetes.io/projected/b17ec983-5d7e-4e15-807e-393999d4aa0e-kube-api-access-mkkzj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.231024 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.231106 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.231260 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.332969 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.333134 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkkzj\" (UniqueName: 
\"kubernetes.io/projected/b17ec983-5d7e-4e15-807e-393999d4aa0e-kube-api-access-mkkzj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.333206 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.333273 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.336765 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.336871 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.338258 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.349647 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkkzj\" (UniqueName: \"kubernetes.io/projected/b17ec983-5d7e-4e15-807e-393999d4aa0e-kube-api-access-mkkzj\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8plpf\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.422066 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.588159 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpsn9" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="registry-server" probeResult="failure" output=< Nov 21 14:36:43 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:36:43 crc kubenswrapper[4904]: > Nov 21 14:36:43 crc kubenswrapper[4904]: I1121 14:36:43.988589 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf"] Nov 21 14:36:44 crc kubenswrapper[4904]: I1121 14:36:44.036235 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" event={"ID":"b17ec983-5d7e-4e15-807e-393999d4aa0e","Type":"ContainerStarted","Data":"dacd85f60ce51589c3650f5d0d184f72833515a2369fe795c8cc4b7b5165b771"} Nov 21 14:36:45 crc kubenswrapper[4904]: I1121 14:36:45.047952 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" event={"ID":"b17ec983-5d7e-4e15-807e-393999d4aa0e","Type":"ContainerStarted","Data":"3f776829db022672c77d2deecf1a108e080c45cd1c42cd28518cc361c68298ff"} Nov 21 14:36:45 crc kubenswrapper[4904]: I1121 14:36:45.074324 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" podStartSLOduration=1.646827491 podStartE2EDuration="2.074301854s" podCreationTimestamp="2025-11-21 14:36:43 +0000 UTC" firstStartedPulling="2025-11-21 14:36:43.995842866 +0000 UTC m=+3878.117375418" lastFinishedPulling="2025-11-21 14:36:44.423317239 +0000 UTC m=+3878.544849781" observedRunningTime="2025-11-21 14:36:45.064701683 +0000 UTC m=+3879.186234235" watchObservedRunningTime="2025-11-21 14:36:45.074301854 +0000 UTC m=+3879.195834426" Nov 21 14:36:47 crc kubenswrapper[4904]: I1121 14:36:47.514128 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:36:47 crc kubenswrapper[4904]: E1121 14:36:47.515043 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:36:53 crc kubenswrapper[4904]: I1121 14:36:53.582091 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpsn9" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="registry-server" probeResult="failure" output=< Nov 21 14:36:53 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:36:53 crc kubenswrapper[4904]: > Nov 21 14:36:55 crc kubenswrapper[4904]: I1121 14:36:55.162238 4904 generic.go:334] "Generic (PLEG): container finished" podID="b17ec983-5d7e-4e15-807e-393999d4aa0e" containerID="3f776829db022672c77d2deecf1a108e080c45cd1c42cd28518cc361c68298ff" exitCode=0 Nov 21 14:36:55 crc kubenswrapper[4904]: I1121 14:36:55.162355 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" 
event={"ID":"b17ec983-5d7e-4e15-807e-393999d4aa0e","Type":"ContainerDied","Data":"3f776829db022672c77d2deecf1a108e080c45cd1c42cd28518cc361c68298ff"} Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.657990 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.758399 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ssh-key\") pod \"b17ec983-5d7e-4e15-807e-393999d4aa0e\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.759044 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkkzj\" (UniqueName: \"kubernetes.io/projected/b17ec983-5d7e-4e15-807e-393999d4aa0e-kube-api-access-mkkzj\") pod \"b17ec983-5d7e-4e15-807e-393999d4aa0e\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.759340 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-inventory\") pod \"b17ec983-5d7e-4e15-807e-393999d4aa0e\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.759619 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ceph\") pod \"b17ec983-5d7e-4e15-807e-393999d4aa0e\" (UID: \"b17ec983-5d7e-4e15-807e-393999d4aa0e\") " Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.763308 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17ec983-5d7e-4e15-807e-393999d4aa0e-kube-api-access-mkkzj" (OuterVolumeSpecName: "kube-api-access-mkkzj") pod "b17ec983-5d7e-4e15-807e-393999d4aa0e" (UID: "b17ec983-5d7e-4e15-807e-393999d4aa0e"). InnerVolumeSpecName "kube-api-access-mkkzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.764200 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ceph" (OuterVolumeSpecName: "ceph") pod "b17ec983-5d7e-4e15-807e-393999d4aa0e" (UID: "b17ec983-5d7e-4e15-807e-393999d4aa0e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.789947 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-inventory" (OuterVolumeSpecName: "inventory") pod "b17ec983-5d7e-4e15-807e-393999d4aa0e" (UID: "b17ec983-5d7e-4e15-807e-393999d4aa0e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.791140 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b17ec983-5d7e-4e15-807e-393999d4aa0e" (UID: "b17ec983-5d7e-4e15-807e-393999d4aa0e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.862985 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.863154 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.863236 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b17ec983-5d7e-4e15-807e-393999d4aa0e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:56 crc kubenswrapper[4904]: I1121 14:36:56.863315 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkkzj\" (UniqueName: \"kubernetes.io/projected/b17ec983-5d7e-4e15-807e-393999d4aa0e-kube-api-access-mkkzj\") on node \"crc\" DevicePath \"\"" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.190020 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" event={"ID":"b17ec983-5d7e-4e15-807e-393999d4aa0e","Type":"ContainerDied","Data":"dacd85f60ce51589c3650f5d0d184f72833515a2369fe795c8cc4b7b5165b771"} Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.190066 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dacd85f60ce51589c3650f5d0d184f72833515a2369fe795c8cc4b7b5165b771" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.190077 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8plpf" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.295950 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t"] Nov 21 14:36:57 crc kubenswrapper[4904]: E1121 14:36:57.296473 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17ec983-5d7e-4e15-807e-393999d4aa0e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.296494 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17ec983-5d7e-4e15-807e-393999d4aa0e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.296798 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17ec983-5d7e-4e15-807e-393999d4aa0e" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.297577 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.300043 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.300894 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.301586 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.302101 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.304081 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.312848 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t"] Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.383469 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.383561 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmvnv\" (UniqueName: \"kubernetes.io/projected/facb16e4-106b-43c2-a62c-92103c2137ee-kube-api-access-lmvnv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.383598 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.383720 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.485518 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.485687 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.485761 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmvnv\" (UniqueName: \"kubernetes.io/projected/facb16e4-106b-43c2-a62c-92103c2137ee-kube-api-access-lmvnv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.485801 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.490643 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.493121 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.493228 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.501771 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmvnv\" (UniqueName: \"kubernetes.io/projected/facb16e4-106b-43c2-a62c-92103c2137ee-kube-api-access-lmvnv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:57 crc kubenswrapper[4904]: I1121 14:36:57.614548 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:36:58 crc kubenswrapper[4904]: I1121 14:36:58.243138 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t"] Nov 21 14:36:58 crc kubenswrapper[4904]: W1121 14:36:58.252541 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfacb16e4_106b_43c2_a62c_92103c2137ee.slice/crio-ca642907e8d78e3e42ef4bc280dd33948bfd709fd20a09e5054d03984dd13e1b WatchSource:0}: Error finding container ca642907e8d78e3e42ef4bc280dd33948bfd709fd20a09e5054d03984dd13e1b: Status 404 returned error can't find the container with id ca642907e8d78e3e42ef4bc280dd33948bfd709fd20a09e5054d03984dd13e1b Nov 21 14:36:59 crc kubenswrapper[4904]: I1121 14:36:59.215004 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" event={"ID":"facb16e4-106b-43c2-a62c-92103c2137ee","Type":"ContainerStarted","Data":"ca642907e8d78e3e42ef4bc280dd33948bfd709fd20a09e5054d03984dd13e1b"} Nov 21 14:37:00 crc kubenswrapper[4904]: I1121 14:37:00.513722 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:37:03 crc kubenswrapper[4904]: I1121 14:37:03.602524 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpsn9" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="registry-server" probeResult="failure" output=< Nov 21 14:37:03 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:37:03 crc kubenswrapper[4904]: > Nov 21 14:37:05 crc kubenswrapper[4904]: I1121 14:37:05.754792 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Nov 21 14:37:08 crc kubenswrapper[4904]: I1121 14:37:08.993599 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:37:09 crc kubenswrapper[4904]: I1121 14:37:09.356259 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"64ae5b3e8909be4ed6a624710ffc81ffb8957c8ae45fd8e8355f9a26c1c1eaf5"} Nov 21 14:37:09 crc kubenswrapper[4904]: I1121 14:37:09.358530 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" event={"ID":"facb16e4-106b-43c2-a62c-92103c2137ee","Type":"ContainerStarted","Data":"c188d85fda27c3904a755c6ca740614adcb5beff95f11c7c2dbec13d478c04aa"} Nov 21 14:37:09 crc kubenswrapper[4904]: I1121 14:37:09.401638 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" podStartSLOduration=1.6667638550000001 podStartE2EDuration="12.401610134s" podCreationTimestamp="2025-11-21 14:36:57 +0000 UTC" firstStartedPulling="2025-11-21 14:36:58.256060598 +0000 UTC m=+3892.377593160" lastFinishedPulling="2025-11-21 14:37:08.990906887 +0000 UTC m=+3903.112439439" observedRunningTime="2025-11-21 14:37:09.400131338 +0000 UTC m=+3903.521663900" watchObservedRunningTime="2025-11-21 14:37:09.401610134 +0000 UTC m=+3903.523142716" Nov 21 
14:37:12 crc kubenswrapper[4904]: I1121 14:37:12.620930 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:37:12 crc kubenswrapper[4904]: I1121 14:37:12.684268 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:37:12 crc kubenswrapper[4904]: I1121 14:37:12.869066 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpsn9"] Nov 21 14:37:14 crc kubenswrapper[4904]: I1121 14:37:14.424772 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lpsn9" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="registry-server" containerID="cri-o://91c637e3a48f6a3fb2164406f1b52ee4c765e1475746408b55ac1ddc57500e8a" gracePeriod=2 Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.438301 4904 generic.go:334] "Generic (PLEG): container finished" podID="7bd4a010-ffa6-418a-814b-de555a845af4" containerID="91c637e3a48f6a3fb2164406f1b52ee4c765e1475746408b55ac1ddc57500e8a" exitCode=0 Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.438364 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpsn9" event={"ID":"7bd4a010-ffa6-418a-814b-de555a845af4","Type":"ContainerDied","Data":"91c637e3a48f6a3fb2164406f1b52ee4c765e1475746408b55ac1ddc57500e8a"} Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.439021 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpsn9" event={"ID":"7bd4a010-ffa6-418a-814b-de555a845af4","Type":"ContainerDied","Data":"3ff798394211358f49fd27d4a8017c718514586de0fd4dcda0efca4f92d91fa9"} Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.439043 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ff798394211358f49fd27d4a8017c718514586de0fd4dcda0efca4f92d91fa9" Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.516739 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.692238 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mfd6\" (UniqueName: \"kubernetes.io/projected/7bd4a010-ffa6-418a-814b-de555a845af4-kube-api-access-5mfd6\") pod \"7bd4a010-ffa6-418a-814b-de555a845af4\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.692504 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-utilities\") pod \"7bd4a010-ffa6-418a-814b-de555a845af4\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.692618 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-catalog-content\") pod \"7bd4a010-ffa6-418a-814b-de555a845af4\" (UID: \"7bd4a010-ffa6-418a-814b-de555a845af4\") " Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.693557 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-utilities" (OuterVolumeSpecName: "utilities") pod "7bd4a010-ffa6-418a-814b-de555a845af4" (UID: "7bd4a010-ffa6-418a-814b-de555a845af4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.789173 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bd4a010-ffa6-418a-814b-de555a845af4" (UID: "7bd4a010-ffa6-418a-814b-de555a845af4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.795186 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:37:15 crc kubenswrapper[4904]: I1121 14:37:15.795222 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd4a010-ffa6-418a-814b-de555a845af4-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:37:16 crc kubenswrapper[4904]: I1121 14:37:16.253028 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd4a010-ffa6-418a-814b-de555a845af4-kube-api-access-5mfd6" (OuterVolumeSpecName: "kube-api-access-5mfd6") pod "7bd4a010-ffa6-418a-814b-de555a845af4" (UID: "7bd4a010-ffa6-418a-814b-de555a845af4"). InnerVolumeSpecName "kube-api-access-5mfd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:37:16 crc kubenswrapper[4904]: I1121 14:37:16.307789 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mfd6\" (UniqueName: \"kubernetes.io/projected/7bd4a010-ffa6-418a-814b-de555a845af4-kube-api-access-5mfd6\") on node \"crc\" DevicePath \"\"" Nov 21 14:37:16 crc kubenswrapper[4904]: I1121 14:37:16.449610 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpsn9" Nov 21 14:37:16 crc kubenswrapper[4904]: I1121 14:37:16.496777 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpsn9"] Nov 21 14:37:16 crc kubenswrapper[4904]: I1121 14:37:16.509818 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lpsn9"] Nov 21 14:37:16 crc kubenswrapper[4904]: I1121 14:37:16.530140 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" path="/var/lib/kubelet/pods/7bd4a010-ffa6-418a-814b-de555a845af4/volumes" Nov 21 14:37:22 crc kubenswrapper[4904]: I1121 14:37:22.516870 4904 generic.go:334] "Generic (PLEG): container finished" podID="facb16e4-106b-43c2-a62c-92103c2137ee" containerID="c188d85fda27c3904a755c6ca740614adcb5beff95f11c7c2dbec13d478c04aa" exitCode=0 Nov 21 14:37:22 crc kubenswrapper[4904]: I1121 14:37:22.526736 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" event={"ID":"facb16e4-106b-43c2-a62c-92103c2137ee","Type":"ContainerDied","Data":"c188d85fda27c3904a755c6ca740614adcb5beff95f11c7c2dbec13d478c04aa"} Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.022995 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.100942 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmvnv\" (UniqueName: \"kubernetes.io/projected/facb16e4-106b-43c2-a62c-92103c2137ee-kube-api-access-lmvnv\") pod \"facb16e4-106b-43c2-a62c-92103c2137ee\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.101122 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ssh-key\") pod \"facb16e4-106b-43c2-a62c-92103c2137ee\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.101402 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-inventory\") pod \"facb16e4-106b-43c2-a62c-92103c2137ee\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.101480 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ceph\") pod \"facb16e4-106b-43c2-a62c-92103c2137ee\" (UID: \"facb16e4-106b-43c2-a62c-92103c2137ee\") " Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.110343 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/facb16e4-106b-43c2-a62c-92103c2137ee-kube-api-access-lmvnv" (OuterVolumeSpecName: "kube-api-access-lmvnv") pod "facb16e4-106b-43c2-a62c-92103c2137ee" (UID: "facb16e4-106b-43c2-a62c-92103c2137ee"). InnerVolumeSpecName "kube-api-access-lmvnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.110389 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ceph" (OuterVolumeSpecName: "ceph") pod "facb16e4-106b-43c2-a62c-92103c2137ee" (UID: "facb16e4-106b-43c2-a62c-92103c2137ee"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.137028 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "facb16e4-106b-43c2-a62c-92103c2137ee" (UID: "facb16e4-106b-43c2-a62c-92103c2137ee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.147866 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-inventory" (OuterVolumeSpecName: "inventory") pod "facb16e4-106b-43c2-a62c-92103c2137ee" (UID: "facb16e4-106b-43c2-a62c-92103c2137ee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.205788 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmvnv\" (UniqueName: \"kubernetes.io/projected/facb16e4-106b-43c2-a62c-92103c2137ee-kube-api-access-lmvnv\") on node \"crc\" DevicePath \"\"" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.205921 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.206005 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.206043 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/facb16e4-106b-43c2-a62c-92103c2137ee-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.546331 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" event={"ID":"facb16e4-106b-43c2-a62c-92103c2137ee","Type":"ContainerDied","Data":"ca642907e8d78e3e42ef4bc280dd33948bfd709fd20a09e5054d03984dd13e1b"} Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.546709 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca642907e8d78e3e42ef4bc280dd33948bfd709fd20a09e5054d03984dd13e1b" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.546802 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.669352 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx"] Nov 21 14:37:24 crc kubenswrapper[4904]: E1121 14:37:24.670100 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="extract-content" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.670126 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="extract-content" Nov 21 14:37:24 crc kubenswrapper[4904]: E1121 14:37:24.670141 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="registry-server" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.670148 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="registry-server" Nov 21 14:37:24 crc kubenswrapper[4904]: E1121 14:37:24.670161 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="extract-utilities" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.670167 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="extract-utilities" Nov 21 14:37:24 crc kubenswrapper[4904]: E1121 14:37:24.670179 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="facb16e4-106b-43c2-a62c-92103c2137ee" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.670186 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="facb16e4-106b-43c2-a62c-92103c2137ee" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.670495 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd4a010-ffa6-418a-814b-de555a845af4" containerName="registry-server" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.670534 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="facb16e4-106b-43c2-a62c-92103c2137ee" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.671510 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.677052 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.677723 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.678997 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.680160 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.680243 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.680279 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.680436 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.680464 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.680508 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.680995 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.684455 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx"] Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824256 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824346 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824494 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824533 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824571 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824678 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824829 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824868 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824908 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.824993 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 
14:37:24.825048 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.825180 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.825317 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.825525 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.825646 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnfwx\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-kube-api-access-gnfwx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.825733 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.825861 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.927790 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.927861 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.927908 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.927941 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.927959 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.927981 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928007 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928034 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928058 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928090 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928118 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928155 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928190 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928235 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928273 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnfwx\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-kube-api-access-gnfwx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 
14:37:24.928298 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.928343 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.934890 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.935055 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.935989 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.936410 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.936854 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.937598 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.937922 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.937931 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.938236 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.938393 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.940104 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.940175 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.941208 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.942477 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.943042 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.947885 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.955052 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnfwx\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-kube-api-access-gnfwx\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:24 crc kubenswrapper[4904]: I1121 14:37:24.994618 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:37:25 crc kubenswrapper[4904]: I1121 14:37:25.615137 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx"] Nov 21 14:37:26 crc kubenswrapper[4904]: I1121 14:37:26.572941 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" event={"ID":"2e2f263f-26f6-4c10-b020-898f112d23d6","Type":"ContainerStarted","Data":"b4ce81a08ec092608616b93bb227eb9917e9e9618365461a24b2f5be39daf9b6"} Nov 21 14:37:27 crc kubenswrapper[4904]: I1121 14:37:27.588160 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" event={"ID":"2e2f263f-26f6-4c10-b020-898f112d23d6","Type":"ContainerStarted","Data":"a42cd73ad9950b8c87893e2d6c71de91b39f0594c1497b6703695272fca9cec2"} Nov 21 14:37:27 crc kubenswrapper[4904]: I1121 14:37:27.625029 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" podStartSLOduration=3.045263649 podStartE2EDuration="3.625003632s" podCreationTimestamp="2025-11-21 14:37:24 +0000 UTC" firstStartedPulling="2025-11-21 14:37:25.61886474 +0000 UTC m=+3919.740397292" lastFinishedPulling="2025-11-21 14:37:26.198604713 +0000 UTC m=+3920.320137275" observedRunningTime="2025-11-21 14:37:27.613023252 +0000 UTC m=+3921.734555894" watchObservedRunningTime="2025-11-21 14:37:27.625003632 +0000 UTC m=+3921.746536184" Nov 21 14:38:27 crc kubenswrapper[4904]: I1121 14:38:27.331789 4904 generic.go:334] "Generic (PLEG): container finished" podID="2e2f263f-26f6-4c10-b020-898f112d23d6" containerID="a42cd73ad9950b8c87893e2d6c71de91b39f0594c1497b6703695272fca9cec2" exitCode=0 Nov 21 14:38:27 crc kubenswrapper[4904]: I1121 14:38:27.331930 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" event={"ID":"2e2f263f-26f6-4c10-b020-898f112d23d6","Type":"ContainerDied","Data":"a42cd73ad9950b8c87893e2d6c71de91b39f0594c1497b6703695272fca9cec2"} Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.849320 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.905833 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ssh-key\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.905899 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-neutron-metadata-combined-ca-bundle\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.905951 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906031 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906059 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906102 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-combined-ca-bundle\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906142 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-repo-setup-combined-ca-bundle\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906194 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-power-monitoring-combined-ca-bundle\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906253 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-nova-combined-ca-bundle\") pod 
\"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906317 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-libvirt-combined-ca-bundle\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906347 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ceph\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906399 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906426 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-bootstrap-combined-ca-bundle\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906466 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906544 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ovn-combined-ca-bundle\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906576 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-inventory\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.906634 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnfwx\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-kube-api-access-gnfwx\") pod \"2e2f263f-26f6-4c10-b020-898f112d23d6\" (UID: \"2e2f263f-26f6-4c10-b020-898f112d23d6\") " Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.920299 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.920321 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.920321 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.920440 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.920554 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.920342 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-kube-api-access-gnfwx" (OuterVolumeSpecName: "kube-api-access-gnfwx") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "kube-api-access-gnfwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.921297 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.921801 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.926725 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ceph" (OuterVolumeSpecName: "ceph") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.931871 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.932007 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.932781 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.933874 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.935753 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.940876 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.966998 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:28 crc kubenswrapper[4904]: I1121 14:38:28.998881 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-inventory" (OuterVolumeSpecName: "inventory") pod "2e2f263f-26f6-4c10-b020-898f112d23d6" (UID: "2e2f263f-26f6-4c10-b020-898f112d23d6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010149 4904 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010201 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010218 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010236 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010252 4904 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010267 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010284 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010296 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010308 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnfwx\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-kube-api-access-gnfwx\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:29 crc kubenswrapper[4904]: 
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010318 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010330 4904 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010349 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010363 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010378 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2e2f263f-26f6-4c10-b020-898f112d23d6-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010392 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010405 4904 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.010418 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2f263f-26f6-4c10-b020-898f112d23d6-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.361448 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx" event={"ID":"2e2f263f-26f6-4c10-b020-898f112d23d6","Type":"ContainerDied","Data":"b4ce81a08ec092608616b93bb227eb9917e9e9618365461a24b2f5be39daf9b6"}
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.361512 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ce81a08ec092608616b93bb227eb9917e9e9618365461a24b2f5be39daf9b6"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.361584 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.518686 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"]
Nov 21 14:38:29 crc kubenswrapper[4904]: E1121 14:38:29.519973 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2f263f-26f6-4c10-b020-898f112d23d6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.520002 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2f263f-26f6-4c10-b020-898f112d23d6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.520317 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2f263f-26f6-4c10-b020-898f112d23d6" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.521727 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.524616 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.524855 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.525027 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.525038 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.525097 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.545954 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"]
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.632550 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.632740 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.632796 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2k4r\" (UniqueName: \"kubernetes.io/projected/e446baad-37a3-4206-a40a-67ac35889d21-kube-api-access-r2k4r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.632858 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.734938 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.735041 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2k4r\" (UniqueName: \"kubernetes.io/projected/e446baad-37a3-4206-a40a-67ac35889d21-kube-api-access-r2k4r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.735111 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.735196 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.741638 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.742908 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.749997 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.762232 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2k4r\" (UniqueName: \"kubernetes.io/projected/e446baad-37a3-4206-a40a-67ac35889d21-kube-api-access-r2k4r\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:29 crc kubenswrapper[4904]: I1121 14:38:29.841909 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"
Nov 21 14:38:30 crc kubenswrapper[4904]: I1121 14:38:30.525682 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb"]
Nov 21 14:38:30 crc kubenswrapper[4904]: I1121 14:38:30.760935 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 14:38:31 crc kubenswrapper[4904]: I1121 14:38:31.385417 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb" event={"ID":"e446baad-37a3-4206-a40a-67ac35889d21","Type":"ContainerStarted","Data":"117d1be941b875d7df8e8a872462d9ec21bf5c2fd94b8744842254ad6d7c2620"}
Nov 21 14:38:32 crc kubenswrapper[4904]: I1121 14:38:32.927583 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cp5vh"]
Nov 21 14:38:32 crc kubenswrapper[4904]: I1121 14:38:32.934406 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:32 crc kubenswrapper[4904]: I1121 14:38:32.937936 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cp5vh"]
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.062845 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-utilities\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.063017 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpmth\" (UniqueName: \"kubernetes.io/projected/89a76037-23b7-4e2f-8610-5edefc76d041-kube-api-access-zpmth\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.063231 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-catalog-content\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.165565 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-utilities\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.165645 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpmth\" (UniqueName: \"kubernetes.io/projected/89a76037-23b7-4e2f-8610-5edefc76d041-kube-api-access-zpmth\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.165722 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-catalog-content\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.166154 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-utilities\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.166209 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-catalog-content\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.201037 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpmth\" (UniqueName: \"kubernetes.io/projected/89a76037-23b7-4e2f-8610-5edefc76d041-kube-api-access-zpmth\") pod \"certified-operators-cp5vh\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " pod="openshift-marketplace/certified-operators-cp5vh"
Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.273825 4904 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-cp5vh" Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.447862 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb" event={"ID":"e446baad-37a3-4206-a40a-67ac35889d21","Type":"ContainerStarted","Data":"efefa57a1b3e3d3a21c52c848222fbc7d61d6ef8d86dd0673684097362743286"} Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.487691 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb" podStartSLOduration=3.163172761 podStartE2EDuration="4.487635715s" podCreationTimestamp="2025-11-21 14:38:29 +0000 UTC" firstStartedPulling="2025-11-21 14:38:30.760487045 +0000 UTC m=+3984.882019597" lastFinishedPulling="2025-11-21 14:38:32.084949999 +0000 UTC m=+3986.206482551" observedRunningTime="2025-11-21 14:38:33.480107562 +0000 UTC m=+3987.601640114" watchObservedRunningTime="2025-11-21 14:38:33.487635715 +0000 UTC m=+3987.609168267" Nov 21 14:38:33 crc kubenswrapper[4904]: I1121 14:38:33.840893 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cp5vh"] Nov 21 14:38:34 crc kubenswrapper[4904]: I1121 14:38:34.461139 4904 generic.go:334] "Generic (PLEG): container finished" podID="89a76037-23b7-4e2f-8610-5edefc76d041" containerID="c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50" exitCode=0 Nov 21 14:38:34 crc kubenswrapper[4904]: I1121 14:38:34.461218 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp5vh" event={"ID":"89a76037-23b7-4e2f-8610-5edefc76d041","Type":"ContainerDied","Data":"c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50"} Nov 21 14:38:34 crc kubenswrapper[4904]: I1121 14:38:34.461728 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp5vh" event={"ID":"89a76037-23b7-4e2f-8610-5edefc76d041","Type":"ContainerStarted","Data":"b2ee84b81ed6a9df07e3a2c0f0258456b7a20a592db7faa3c89c02139215dcd5"} Nov 21 14:38:36 crc kubenswrapper[4904]: I1121 14:38:36.488800 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp5vh" event={"ID":"89a76037-23b7-4e2f-8610-5edefc76d041","Type":"ContainerStarted","Data":"4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293"} Nov 21 14:38:39 crc kubenswrapper[4904]: I1121 14:38:39.545381 4904 generic.go:334] "Generic (PLEG): container finished" podID="e446baad-37a3-4206-a40a-67ac35889d21" containerID="efefa57a1b3e3d3a21c52c848222fbc7d61d6ef8d86dd0673684097362743286" exitCode=0 Nov 21 14:38:39 crc kubenswrapper[4904]: I1121 14:38:39.546360 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb" event={"ID":"e446baad-37a3-4206-a40a-67ac35889d21","Type":"ContainerDied","Data":"efefa57a1b3e3d3a21c52c848222fbc7d61d6ef8d86dd0673684097362743286"} Nov 21 14:38:39 crc kubenswrapper[4904]: I1121 14:38:39.564425 4904 generic.go:334] "Generic (PLEG): container finished" podID="89a76037-23b7-4e2f-8610-5edefc76d041" containerID="4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293" exitCode=0 Nov 21 14:38:39 crc kubenswrapper[4904]: I1121 14:38:39.564496 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp5vh" 
event={"ID":"89a76037-23b7-4e2f-8610-5edefc76d041","Type":"ContainerDied","Data":"4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293"} Nov 21 14:38:40 crc kubenswrapper[4904]: I1121 14:38:40.585010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp5vh" event={"ID":"89a76037-23b7-4e2f-8610-5edefc76d041","Type":"ContainerStarted","Data":"a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa"} Nov 21 14:38:40 crc kubenswrapper[4904]: I1121 14:38:40.624234 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cp5vh" podStartSLOduration=3.026737228 podStartE2EDuration="8.624212106s" podCreationTimestamp="2025-11-21 14:38:32 +0000 UTC" firstStartedPulling="2025-11-21 14:38:34.463917253 +0000 UTC m=+3988.585449795" lastFinishedPulling="2025-11-21 14:38:40.061392121 +0000 UTC m=+3994.182924673" observedRunningTime="2025-11-21 14:38:40.612637825 +0000 UTC m=+3994.734170377" watchObservedRunningTime="2025-11-21 14:38:40.624212106 +0000 UTC m=+3994.745744658" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.274816 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.415417 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ceph\") pod \"e446baad-37a3-4206-a40a-67ac35889d21\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.415501 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2k4r\" (UniqueName: \"kubernetes.io/projected/e446baad-37a3-4206-a40a-67ac35889d21-kube-api-access-r2k4r\") pod \"e446baad-37a3-4206-a40a-67ac35889d21\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.415525 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ssh-key\") pod \"e446baad-37a3-4206-a40a-67ac35889d21\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.415733 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-inventory\") pod \"e446baad-37a3-4206-a40a-67ac35889d21\" (UID: \"e446baad-37a3-4206-a40a-67ac35889d21\") " Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.449457 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ceph" (OuterVolumeSpecName: "ceph") pod "e446baad-37a3-4206-a40a-67ac35889d21" (UID: "e446baad-37a3-4206-a40a-67ac35889d21"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.449890 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e446baad-37a3-4206-a40a-67ac35889d21-kube-api-access-r2k4r" (OuterVolumeSpecName: "kube-api-access-r2k4r") pod "e446baad-37a3-4206-a40a-67ac35889d21" (UID: "e446baad-37a3-4206-a40a-67ac35889d21"). InnerVolumeSpecName "kube-api-access-r2k4r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.465854 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e446baad-37a3-4206-a40a-67ac35889d21" (UID: "e446baad-37a3-4206-a40a-67ac35889d21"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.467522 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-inventory" (OuterVolumeSpecName: "inventory") pod "e446baad-37a3-4206-a40a-67ac35889d21" (UID: "e446baad-37a3-4206-a40a-67ac35889d21"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.518088 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.518124 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2k4r\" (UniqueName: \"kubernetes.io/projected/e446baad-37a3-4206-a40a-67ac35889d21-kube-api-access-r2k4r\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.518136 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.518144 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e446baad-37a3-4206-a40a-67ac35889d21-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.597259 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb" event={"ID":"e446baad-37a3-4206-a40a-67ac35889d21","Type":"ContainerDied","Data":"117d1be941b875d7df8e8a872462d9ec21bf5c2fd94b8744842254ad6d7c2620"} Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.597606 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117d1be941b875d7df8e8a872462d9ec21bf5c2fd94b8744842254ad6d7c2620" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.597389 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.695186 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt"] Nov 21 14:38:41 crc kubenswrapper[4904]: E1121 14:38:41.696440 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e446baad-37a3-4206-a40a-67ac35889d21" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.696474 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e446baad-37a3-4206-a40a-67ac35889d21" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.696762 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e446baad-37a3-4206-a40a-67ac35889d21" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.698068 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.701521 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.701576 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.701798 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.701877 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.702052 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.706845 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.712398 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt"] Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.826691 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.826757 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.826874 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7qps\" (UniqueName: 
\"kubernetes.io/projected/c336a560-11e3-4740-b2f3-ebc5203fb0ad-kube-api-access-z7qps\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.826951 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.827045 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.827123 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.929505 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.929610 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.929644 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.929695 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7qps\" (UniqueName: \"kubernetes.io/projected/c336a560-11e3-4740-b2f3-ebc5203fb0ad-kube-api-access-z7qps\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.929770 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovn-combined-ca-bundle\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.929823 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.931804 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.936184 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.936341 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.936442 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.952593 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:41 crc kubenswrapper[4904]: I1121 14:38:41.953190 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7qps\" (UniqueName: \"kubernetes.io/projected/c336a560-11e3-4740-b2f3-ebc5203fb0ad-kube-api-access-z7qps\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-qcxnt\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:42 crc kubenswrapper[4904]: I1121 14:38:42.026378 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:38:42 crc kubenswrapper[4904]: I1121 14:38:42.583367 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt"] Nov 21 14:38:42 crc kubenswrapper[4904]: W1121 14:38:42.590444 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc336a560_11e3_4740_b2f3_ebc5203fb0ad.slice/crio-8c95cc12975571a6947452c5fb63988a1486119fcc472ed5aa8ddf20fb41a6de WatchSource:0}: Error finding container 8c95cc12975571a6947452c5fb63988a1486119fcc472ed5aa8ddf20fb41a6de: Status 404 returned error can't find the container with id 8c95cc12975571a6947452c5fb63988a1486119fcc472ed5aa8ddf20fb41a6de Nov 21 14:38:42 crc kubenswrapper[4904]: I1121 14:38:42.616675 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" event={"ID":"c336a560-11e3-4740-b2f3-ebc5203fb0ad","Type":"ContainerStarted","Data":"8c95cc12975571a6947452c5fb63988a1486119fcc472ed5aa8ddf20fb41a6de"} Nov 21 14:38:43 crc kubenswrapper[4904]: I1121 14:38:43.274911 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cp5vh" Nov 21 14:38:43 crc kubenswrapper[4904]: I1121 14:38:43.275032 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cp5vh" Nov 21 14:38:43 crc kubenswrapper[4904]: I1121 14:38:43.692438 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cp5vh" Nov 21 14:38:44 crc kubenswrapper[4904]: I1121 14:38:44.639759 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" event={"ID":"c336a560-11e3-4740-b2f3-ebc5203fb0ad","Type":"ContainerStarted","Data":"250c3c3e8171616e4f4c3524354ff22c69ab10e0269a2359f0ae1bbf4922e87a"} Nov 21 14:38:44 crc kubenswrapper[4904]: I1121 14:38:44.666441 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" podStartSLOduration=2.946074331 podStartE2EDuration="3.666416442s" podCreationTimestamp="2025-11-21 14:38:41 +0000 UTC" firstStartedPulling="2025-11-21 14:38:42.592850601 +0000 UTC m=+3996.714383153" lastFinishedPulling="2025-11-21 14:38:43.313192712 +0000 UTC m=+3997.434725264" observedRunningTime="2025-11-21 14:38:44.664244589 +0000 UTC m=+3998.785777141" watchObservedRunningTime="2025-11-21 14:38:44.666416442 +0000 UTC m=+3998.787948994" Nov 21 14:38:53 crc kubenswrapper[4904]: I1121 14:38:53.918894 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cp5vh" Nov 21 14:38:53 crc kubenswrapper[4904]: I1121 14:38:53.989554 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cp5vh"] Nov 21 14:38:54 crc kubenswrapper[4904]: I1121 14:38:54.782785 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cp5vh" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" containerName="registry-server" containerID="cri-o://a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa" gracePeriod=2 Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.372471 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cp5vh" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.387551 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-catalog-content\") pod \"89a76037-23b7-4e2f-8610-5edefc76d041\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.387854 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpmth\" (UniqueName: \"kubernetes.io/projected/89a76037-23b7-4e2f-8610-5edefc76d041-kube-api-access-zpmth\") pod \"89a76037-23b7-4e2f-8610-5edefc76d041\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.387944 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-utilities\") pod \"89a76037-23b7-4e2f-8610-5edefc76d041\" (UID: \"89a76037-23b7-4e2f-8610-5edefc76d041\") " Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.388926 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-utilities" (OuterVolumeSpecName: "utilities") pod "89a76037-23b7-4e2f-8610-5edefc76d041" (UID: "89a76037-23b7-4e2f-8610-5edefc76d041"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.399890 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a76037-23b7-4e2f-8610-5edefc76d041-kube-api-access-zpmth" (OuterVolumeSpecName: "kube-api-access-zpmth") pod "89a76037-23b7-4e2f-8610-5edefc76d041" (UID: "89a76037-23b7-4e2f-8610-5edefc76d041"). InnerVolumeSpecName "kube-api-access-zpmth". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.466496 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89a76037-23b7-4e2f-8610-5edefc76d041" (UID: "89a76037-23b7-4e2f-8610-5edefc76d041"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.491364 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.491423 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpmth\" (UniqueName: \"kubernetes.io/projected/89a76037-23b7-4e2f-8610-5edefc76d041-kube-api-access-zpmth\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.491444 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a76037-23b7-4e2f-8610-5edefc76d041-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.799531 4904 generic.go:334] "Generic (PLEG): container finished" podID="89a76037-23b7-4e2f-8610-5edefc76d041" containerID="a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa" exitCode=0 Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.799593 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp5vh" event={"ID":"89a76037-23b7-4e2f-8610-5edefc76d041","Type":"ContainerDied","Data":"a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa"} Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.799620 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cp5vh" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.799645 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cp5vh" event={"ID":"89a76037-23b7-4e2f-8610-5edefc76d041","Type":"ContainerDied","Data":"b2ee84b81ed6a9df07e3a2c0f0258456b7a20a592db7faa3c89c02139215dcd5"} Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.799690 4904 scope.go:117] "RemoveContainer" containerID="a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.848543 4904 scope.go:117] "RemoveContainer" containerID="4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.849816 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cp5vh"] Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.860838 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cp5vh"] Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.881817 4904 scope.go:117] "RemoveContainer" containerID="c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.947678 4904 scope.go:117] "RemoveContainer" containerID="a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa" Nov 21 14:38:55 crc kubenswrapper[4904]: E1121 14:38:55.948445 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa\": container with ID starting with a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa not found: ID does not exist" containerID="a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.948520 
4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa"} err="failed to get container status \"a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa\": rpc error: code = NotFound desc = could not find container \"a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa\": container with ID starting with a1cd89a184566b96fab76c8884cf292f7919a6b8f87063594a8d6324d01145aa not found: ID does not exist" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.948577 4904 scope.go:117] "RemoveContainer" containerID="4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293" Nov 21 14:38:55 crc kubenswrapper[4904]: E1121 14:38:55.949127 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293\": container with ID starting with 4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293 not found: ID does not exist" containerID="4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.949182 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293"} err="failed to get container status \"4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293\": rpc error: code = NotFound desc = could not find container \"4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293\": container with ID starting with 4ba2605075b163f37db8bb54d81974f9904090c37ca02c3999977cf20cced293 not found: ID does not exist" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.949217 4904 scope.go:117] "RemoveContainer" containerID="c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50" Nov 21 14:38:55 crc kubenswrapper[4904]: E1121 14:38:55.949524 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50\": container with ID starting with c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50 not found: ID does not exist" containerID="c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50" Nov 21 14:38:55 crc kubenswrapper[4904]: I1121 14:38:55.949554 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50"} err="failed to get container status \"c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50\": rpc error: code = NotFound desc = could not find container \"c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50\": container with ID starting with c07d178bd6bcde4f6eda71bc0b31e8e82e7da1088d1faecbe1db2c47bd380b50 not found: ID does not exist" Nov 21 14:38:56 crc kubenswrapper[4904]: I1121 14:38:56.528325 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" path="/var/lib/kubelet/pods/89a76037-23b7-4e2f-8610-5edefc76d041/volumes" Nov 21 14:39:28 crc kubenswrapper[4904]: I1121 14:39:28.113258 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:39:28 crc kubenswrapper[4904]: I1121 14:39:28.113833 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:39:58 crc kubenswrapper[4904]: I1121 14:39:58.113689 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:39:58 crc kubenswrapper[4904]: I1121 14:39:58.114350 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:40:14 crc kubenswrapper[4904]: I1121 14:40:14.801165 4904 generic.go:334] "Generic (PLEG): container finished" podID="c336a560-11e3-4740-b2f3-ebc5203fb0ad" containerID="250c3c3e8171616e4f4c3524354ff22c69ab10e0269a2359f0ae1bbf4922e87a" exitCode=0 Nov 21 14:40:14 crc kubenswrapper[4904]: I1121 14:40:14.801304 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" event={"ID":"c336a560-11e3-4740-b2f3-ebc5203fb0ad","Type":"ContainerDied","Data":"250c3c3e8171616e4f4c3524354ff22c69ab10e0269a2359f0ae1bbf4922e87a"} Nov 21 14:40:16 crc kubenswrapper[4904]: I1121 14:40:16.872963 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.044956 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ssh-key\") pod \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.045359 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-inventory\") pod \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.045439 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7qps\" (UniqueName: \"kubernetes.io/projected/c336a560-11e3-4740-b2f3-ebc5203fb0ad-kube-api-access-z7qps\") pod \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.045508 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ceph\") pod \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.046549 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovncontroller-config-0\") pod \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.046626 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovn-combined-ca-bundle\") pod \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\" (UID: \"c336a560-11e3-4740-b2f3-ebc5203fb0ad\") " Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.053134 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c336a560-11e3-4740-b2f3-ebc5203fb0ad-kube-api-access-z7qps" (OuterVolumeSpecName: "kube-api-access-z7qps") pod "c336a560-11e3-4740-b2f3-ebc5203fb0ad" (UID: "c336a560-11e3-4740-b2f3-ebc5203fb0ad"). InnerVolumeSpecName "kube-api-access-z7qps". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.055110 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c336a560-11e3-4740-b2f3-ebc5203fb0ad" (UID: "c336a560-11e3-4740-b2f3-ebc5203fb0ad"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.058676 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ceph" (OuterVolumeSpecName: "ceph") pod "c336a560-11e3-4740-b2f3-ebc5203fb0ad" (UID: "c336a560-11e3-4740-b2f3-ebc5203fb0ad"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.082159 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c336a560-11e3-4740-b2f3-ebc5203fb0ad" (UID: "c336a560-11e3-4740-b2f3-ebc5203fb0ad"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.086819 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c336a560-11e3-4740-b2f3-ebc5203fb0ad" (UID: "c336a560-11e3-4740-b2f3-ebc5203fb0ad"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.090612 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-inventory" (OuterVolumeSpecName: "inventory") pod "c336a560-11e3-4740-b2f3-ebc5203fb0ad" (UID: "c336a560-11e3-4740-b2f3-ebc5203fb0ad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.150762 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7qps\" (UniqueName: \"kubernetes.io/projected/c336a560-11e3-4740-b2f3-ebc5203fb0ad-kube-api-access-z7qps\") on node \"crc\" DevicePath \"\"" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.150826 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.150840 4904 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.150857 4904 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.150871 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.150885 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c336a560-11e3-4740-b2f3-ebc5203fb0ad-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.271633 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" event={"ID":"c336a560-11e3-4740-b2f3-ebc5203fb0ad","Type":"ContainerDied","Data":"8c95cc12975571a6947452c5fb63988a1486119fcc472ed5aa8ddf20fb41a6de"} Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 14:40:17.271693 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c95cc12975571a6947452c5fb63988a1486119fcc472ed5aa8ddf20fb41a6de" Nov 21 14:40:17 crc kubenswrapper[4904]: I1121 
14:40:17.271768 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-qcxnt" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.014312 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq"] Nov 21 14:40:18 crc kubenswrapper[4904]: E1121 14:40:18.015416 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" containerName="extract-utilities" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.015433 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" containerName="extract-utilities" Nov 21 14:40:18 crc kubenswrapper[4904]: E1121 14:40:18.015456 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" containerName="extract-content" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.015462 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" containerName="extract-content" Nov 21 14:40:18 crc kubenswrapper[4904]: E1121 14:40:18.015474 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" containerName="registry-server" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.015481 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" containerName="registry-server" Nov 21 14:40:18 crc kubenswrapper[4904]: E1121 14:40:18.015495 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c336a560-11e3-4740-b2f3-ebc5203fb0ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.015505 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="c336a560-11e3-4740-b2f3-ebc5203fb0ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.015770 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="c336a560-11e3-4740-b2f3-ebc5203fb0ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.015797 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a76037-23b7-4e2f-8610-5edefc76d041" containerName="registry-server" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.016732 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.026896 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.027023 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.027085 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.027262 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.027274 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.027404 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.027509 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.029600 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq"] Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.182922 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.182978 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.183018 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.183043 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.183118 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.183144 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2wxz\" (UniqueName: \"kubernetes.io/projected/7cc82b71-e69b-4404-843c-afdc4b449ab4-kube-api-access-t2wxz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.183372 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.287203 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.287277 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2wxz\" (UniqueName: \"kubernetes.io/projected/7cc82b71-e69b-4404-843c-afdc4b449ab4-kube-api-access-t2wxz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.287307 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.287754 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.287805 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") 
" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.287880 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.287921 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.293518 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.294345 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.294648 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.299436 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.299531 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.300029 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.309999 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2wxz\" (UniqueName: \"kubernetes.io/projected/7cc82b71-e69b-4404-843c-afdc4b449ab4-kube-api-access-t2wxz\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.345533 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:40:18 crc kubenswrapper[4904]: I1121 14:40:18.968683 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq"] Nov 21 14:40:19 crc kubenswrapper[4904]: I1121 14:40:19.291323 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" event={"ID":"7cc82b71-e69b-4404-843c-afdc4b449ab4","Type":"ContainerStarted","Data":"fdee80e9943e158d403dc29b9b1e94205b95ba3a456bb6ad86d009976b38d3e5"} Nov 21 14:40:20 crc kubenswrapper[4904]: I1121 14:40:20.303885 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" event={"ID":"7cc82b71-e69b-4404-843c-afdc4b449ab4","Type":"ContainerStarted","Data":"160aea4cd7f625c6199bd4ddfb735cd79cd0db9d4cbae9e36067e0982e35b0b2"} Nov 21 14:40:20 crc kubenswrapper[4904]: I1121 14:40:20.330998 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" podStartSLOduration=2.847732098 podStartE2EDuration="3.330974059s" podCreationTimestamp="2025-11-21 14:40:17 +0000 UTC" firstStartedPulling="2025-11-21 14:40:18.978243922 +0000 UTC m=+4093.099776464" lastFinishedPulling="2025-11-21 14:40:19.461485873 +0000 UTC m=+4093.583018425" observedRunningTime="2025-11-21 14:40:20.32150592 +0000 UTC m=+4094.443038492" watchObservedRunningTime="2025-11-21 14:40:20.330974059 +0000 UTC m=+4094.452506621" Nov 21 14:40:28 crc kubenswrapper[4904]: I1121 14:40:28.113306 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:40:28 crc kubenswrapper[4904]: I1121 14:40:28.114166 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:40:28 crc kubenswrapper[4904]: I1121 14:40:28.114229 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 14:40:28 crc kubenswrapper[4904]: I1121 14:40:28.115293 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"64ae5b3e8909be4ed6a624710ffc81ffb8957c8ae45fd8e8355f9a26c1c1eaf5"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:40:28 crc kubenswrapper[4904]: I1121 14:40:28.115360 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://64ae5b3e8909be4ed6a624710ffc81ffb8957c8ae45fd8e8355f9a26c1c1eaf5" gracePeriod=600 Nov 21 14:40:28 crc kubenswrapper[4904]: I1121 14:40:28.392533 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="64ae5b3e8909be4ed6a624710ffc81ffb8957c8ae45fd8e8355f9a26c1c1eaf5" exitCode=0 Nov 21 14:40:28 crc kubenswrapper[4904]: I1121 14:40:28.392610 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"64ae5b3e8909be4ed6a624710ffc81ffb8957c8ae45fd8e8355f9a26c1c1eaf5"} Nov 21 14:40:28 crc kubenswrapper[4904]: I1121 14:40:28.392708 4904 scope.go:117] "RemoveContainer" containerID="4f1ee75ae69867d30ae23c7be33b9bd5fd2d659945672a61c1cc87834eb3fd19" Nov 21 14:40:29 crc kubenswrapper[4904]: I1121 14:40:29.404994 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7"} Nov 21 14:42:01 crc kubenswrapper[4904]: I1121 14:42:01.687528 4904 generic.go:334] "Generic (PLEG): container finished" podID="7cc82b71-e69b-4404-843c-afdc4b449ab4" containerID="160aea4cd7f625c6199bd4ddfb735cd79cd0db9d4cbae9e36067e0982e35b0b2" exitCode=0 Nov 21 14:42:01 crc kubenswrapper[4904]: I1121 14:42:01.687678 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" event={"ID":"7cc82b71-e69b-4404-843c-afdc4b449ab4","Type":"ContainerDied","Data":"160aea4cd7f625c6199bd4ddfb735cd79cd0db9d4cbae9e36067e0982e35b0b2"} Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.337006 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.414374 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-nova-metadata-neutron-config-0\") pod \"7cc82b71-e69b-4404-843c-afdc4b449ab4\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.414564 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ssh-key\") pod \"7cc82b71-e69b-4404-843c-afdc4b449ab4\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.414608 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-inventory\") pod \"7cc82b71-e69b-4404-843c-afdc4b449ab4\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.414838 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7cc82b71-e69b-4404-843c-afdc4b449ab4\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.414964 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ceph\") pod \"7cc82b71-e69b-4404-843c-afdc4b449ab4\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.415037 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-metadata-combined-ca-bundle\") pod \"7cc82b71-e69b-4404-843c-afdc4b449ab4\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.415086 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2wxz\" (UniqueName: \"kubernetes.io/projected/7cc82b71-e69b-4404-843c-afdc4b449ab4-kube-api-access-t2wxz\") pod \"7cc82b71-e69b-4404-843c-afdc4b449ab4\" (UID: \"7cc82b71-e69b-4404-843c-afdc4b449ab4\") " Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.438057 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc82b71-e69b-4404-843c-afdc4b449ab4-kube-api-access-t2wxz" (OuterVolumeSpecName: "kube-api-access-t2wxz") pod "7cc82b71-e69b-4404-843c-afdc4b449ab4" (UID: "7cc82b71-e69b-4404-843c-afdc4b449ab4"). InnerVolumeSpecName "kube-api-access-t2wxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.438551 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ceph" (OuterVolumeSpecName: "ceph") pod "7cc82b71-e69b-4404-843c-afdc4b449ab4" (UID: "7cc82b71-e69b-4404-843c-afdc4b449ab4"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.442857 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7cc82b71-e69b-4404-843c-afdc4b449ab4" (UID: "7cc82b71-e69b-4404-843c-afdc4b449ab4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.453698 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-inventory" (OuterVolumeSpecName: "inventory") pod "7cc82b71-e69b-4404-843c-afdc4b449ab4" (UID: "7cc82b71-e69b-4404-843c-afdc4b449ab4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.454093 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7cc82b71-e69b-4404-843c-afdc4b449ab4" (UID: "7cc82b71-e69b-4404-843c-afdc4b449ab4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.464341 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7cc82b71-e69b-4404-843c-afdc4b449ab4" (UID: "7cc82b71-e69b-4404-843c-afdc4b449ab4"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.471057 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7cc82b71-e69b-4404-843c-afdc4b449ab4" (UID: "7cc82b71-e69b-4404-843c-afdc4b449ab4"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.518607 4904 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.518702 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.518717 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.518732 4904 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.518746 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.518758 4904 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc82b71-e69b-4404-843c-afdc4b449ab4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.518772 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2wxz\" (UniqueName: \"kubernetes.io/projected/7cc82b71-e69b-4404-843c-afdc4b449ab4-kube-api-access-t2wxz\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.718041 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" event={"ID":"7cc82b71-e69b-4404-843c-afdc4b449ab4","Type":"ContainerDied","Data":"fdee80e9943e158d403dc29b9b1e94205b95ba3a456bb6ad86d009976b38d3e5"} Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.718129 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdee80e9943e158d403dc29b9b1e94205b95ba3a456bb6ad86d009976b38d3e5" Nov 21 14:42:03 crc kubenswrapper[4904]: I1121 14:42:03.718148 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.044762 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc"] Nov 21 14:42:04 crc kubenswrapper[4904]: E1121 14:42:04.045807 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc82b71-e69b-4404-843c-afdc4b449ab4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.045854 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc82b71-e69b-4404-843c-afdc4b449ab4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.046369 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc82b71-e69b-4404-843c-afdc4b449ab4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.048229 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.052680 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.052738 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.052841 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.055170 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc"] Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.053035 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.053176 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.053182 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.134512 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.134926 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.135465 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8lt9\" (UniqueName: 
\"kubernetes.io/projected/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-kube-api-access-t8lt9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.135881 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.136017 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.136107 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.239979 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.240187 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8lt9\" (UniqueName: \"kubernetes.io/projected/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-kube-api-access-t8lt9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.240322 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.240366 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.240413 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.240513 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.265906 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.266836 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.267035 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.267053 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.272307 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8lt9\" (UniqueName: \"kubernetes.io/projected/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-kube-api-access-t8lt9\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.272511 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:04 crc kubenswrapper[4904]: I1121 14:42:04.419871 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:42:05 crc kubenswrapper[4904]: I1121 14:42:05.006582 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc"] Nov 21 14:42:05 crc kubenswrapper[4904]: I1121 14:42:05.743890 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" event={"ID":"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19","Type":"ContainerStarted","Data":"c49baf06beb6ede05f63cab8f56c196324fc40b9936311287f8cd38c67a382ee"} Nov 21 14:42:06 crc kubenswrapper[4904]: I1121 14:42:06.765368 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" event={"ID":"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19","Type":"ContainerStarted","Data":"a99eb0cd9792273dec221516aeae61833f1fe7d49fb6dd72ce8b6fa3a1ffc92b"} Nov 21 14:42:06 crc kubenswrapper[4904]: I1121 14:42:06.807718 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" podStartSLOduration=2.213221399 podStartE2EDuration="2.807681937s" podCreationTimestamp="2025-11-21 14:42:04 +0000 UTC" firstStartedPulling="2025-11-21 14:42:05.015261432 +0000 UTC m=+4199.136794004" lastFinishedPulling="2025-11-21 14:42:05.60972197 +0000 UTC m=+4199.731254542" observedRunningTime="2025-11-21 14:42:06.790699957 +0000 UTC m=+4200.912232549" watchObservedRunningTime="2025-11-21 14:42:06.807681937 +0000 UTC m=+4200.929214529" Nov 21 14:42:16 crc kubenswrapper[4904]: I1121 14:42:16.960114 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-whbdk"] Nov 21 14:42:16 crc kubenswrapper[4904]: I1121 14:42:16.980595 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:16 crc kubenswrapper[4904]: I1121 14:42:16.990544 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whbdk"] Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.120455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-utilities\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.120896 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsw9d\" (UniqueName: \"kubernetes.io/projected/45a5c664-b036-454f-afc8-6024fe6a116d-kube-api-access-fsw9d\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.120924 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-catalog-content\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.222982 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-utilities\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.223142 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsw9d\" (UniqueName: \"kubernetes.io/projected/45a5c664-b036-454f-afc8-6024fe6a116d-kube-api-access-fsw9d\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.223170 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-catalog-content\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.223623 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-utilities\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.223716 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-catalog-content\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.248636 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fsw9d\" (UniqueName: \"kubernetes.io/projected/45a5c664-b036-454f-afc8-6024fe6a116d-kube-api-access-fsw9d\") pod \"community-operators-whbdk\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.314768 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:17 crc kubenswrapper[4904]: I1121 14:42:17.903020 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whbdk"] Nov 21 14:42:18 crc kubenswrapper[4904]: I1121 14:42:18.929042 4904 generic.go:334] "Generic (PLEG): container finished" podID="45a5c664-b036-454f-afc8-6024fe6a116d" containerID="a7af8f8d426c98d037886278562007344f477b9a4d737ad296ab4bb0f3c8f148" exitCode=0 Nov 21 14:42:18 crc kubenswrapper[4904]: I1121 14:42:18.929172 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whbdk" event={"ID":"45a5c664-b036-454f-afc8-6024fe6a116d","Type":"ContainerDied","Data":"a7af8f8d426c98d037886278562007344f477b9a4d737ad296ab4bb0f3c8f148"} Nov 21 14:42:18 crc kubenswrapper[4904]: I1121 14:42:18.929429 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whbdk" event={"ID":"45a5c664-b036-454f-afc8-6024fe6a116d","Type":"ContainerStarted","Data":"e1f7112af6996e84b92f63735aa2bed5f957c77e203c7be22bbbc1e094e67b1f"} Nov 21 14:42:20 crc kubenswrapper[4904]: I1121 14:42:20.955269 4904 generic.go:334] "Generic (PLEG): container finished" podID="45a5c664-b036-454f-afc8-6024fe6a116d" containerID="e573dc52ed5f835092f8fc5dd2d56de86e5f8e83f4bba14b8c5653bf4f3da866" exitCode=0 Nov 21 14:42:20 crc kubenswrapper[4904]: I1121 14:42:20.955385 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whbdk" event={"ID":"45a5c664-b036-454f-afc8-6024fe6a116d","Type":"ContainerDied","Data":"e573dc52ed5f835092f8fc5dd2d56de86e5f8e83f4bba14b8c5653bf4f3da866"} Nov 21 14:42:21 crc kubenswrapper[4904]: I1121 14:42:21.970841 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whbdk" event={"ID":"45a5c664-b036-454f-afc8-6024fe6a116d","Type":"ContainerStarted","Data":"9af752cdcf9e07033ab31c744dd327dbf5396528d51dbb1e413ccf595b83923f"} Nov 21 14:42:21 crc kubenswrapper[4904]: I1121 14:42:21.998322 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-whbdk" podStartSLOduration=3.512803169 podStartE2EDuration="5.998300087s" podCreationTimestamp="2025-11-21 14:42:16 +0000 UTC" firstStartedPulling="2025-11-21 14:42:18.932448251 +0000 UTC m=+4213.053980803" lastFinishedPulling="2025-11-21 14:42:21.417945169 +0000 UTC m=+4215.539477721" observedRunningTime="2025-11-21 14:42:21.986954242 +0000 UTC m=+4216.108486794" watchObservedRunningTime="2025-11-21 14:42:21.998300087 +0000 UTC m=+4216.119832639" Nov 21 14:42:27 crc kubenswrapper[4904]: I1121 14:42:27.315243 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:27 crc kubenswrapper[4904]: I1121 14:42:27.315850 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:27 crc kubenswrapper[4904]: I1121 14:42:27.394480 4904 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:28 crc kubenswrapper[4904]: I1121 14:42:28.113546 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:42:28 crc kubenswrapper[4904]: I1121 14:42:28.113629 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:42:28 crc kubenswrapper[4904]: I1121 14:42:28.137908 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:28 crc kubenswrapper[4904]: I1121 14:42:28.207969 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whbdk"] Nov 21 14:42:30 crc kubenswrapper[4904]: I1121 14:42:30.086872 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-whbdk" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" containerName="registry-server" containerID="cri-o://9af752cdcf9e07033ab31c744dd327dbf5396528d51dbb1e413ccf595b83923f" gracePeriod=2 Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.103032 4904 generic.go:334] "Generic (PLEG): container finished" podID="45a5c664-b036-454f-afc8-6024fe6a116d" containerID="9af752cdcf9e07033ab31c744dd327dbf5396528d51dbb1e413ccf595b83923f" exitCode=0 Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.103156 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whbdk" event={"ID":"45a5c664-b036-454f-afc8-6024fe6a116d","Type":"ContainerDied","Data":"9af752cdcf9e07033ab31c744dd327dbf5396528d51dbb1e413ccf595b83923f"} Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.104086 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whbdk" event={"ID":"45a5c664-b036-454f-afc8-6024fe6a116d","Type":"ContainerDied","Data":"e1f7112af6996e84b92f63735aa2bed5f957c77e203c7be22bbbc1e094e67b1f"} Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.104122 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f7112af6996e84b92f63735aa2bed5f957c77e203c7be22bbbc1e094e67b1f" Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.111481 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.117270 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-catalog-content\") pod \"45a5c664-b036-454f-afc8-6024fe6a116d\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.117401 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsw9d\" (UniqueName: \"kubernetes.io/projected/45a5c664-b036-454f-afc8-6024fe6a116d-kube-api-access-fsw9d\") pod \"45a5c664-b036-454f-afc8-6024fe6a116d\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.117467 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-utilities\") pod \"45a5c664-b036-454f-afc8-6024fe6a116d\" (UID: \"45a5c664-b036-454f-afc8-6024fe6a116d\") " Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.119939 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-utilities" (OuterVolumeSpecName: "utilities") pod "45a5c664-b036-454f-afc8-6024fe6a116d" (UID: "45a5c664-b036-454f-afc8-6024fe6a116d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.127954 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a5c664-b036-454f-afc8-6024fe6a116d-kube-api-access-fsw9d" (OuterVolumeSpecName: "kube-api-access-fsw9d") pod "45a5c664-b036-454f-afc8-6024fe6a116d" (UID: "45a5c664-b036-454f-afc8-6024fe6a116d"). InnerVolumeSpecName "kube-api-access-fsw9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.196436 4904 scope.go:117] "RemoveContainer" containerID="9afba735ac6d244260e59ecb2abd04d8ebb9f7df3b3854272f45f6e93b2d4a90" Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.221872 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsw9d\" (UniqueName: \"kubernetes.io/projected/45a5c664-b036-454f-afc8-6024fe6a116d-kube-api-access-fsw9d\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.221924 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:31 crc kubenswrapper[4904]: I1121 14:42:31.231833 4904 scope.go:117] "RemoveContainer" containerID="3526a60263e560d57f1c3ace0df7d590a2d4ce99eeb49a37b1a25bc0f357c4b7" Nov 21 14:42:32 crc kubenswrapper[4904]: I1121 14:42:32.125871 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whbdk" Nov 21 14:42:32 crc kubenswrapper[4904]: I1121 14:42:32.180644 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45a5c664-b036-454f-afc8-6024fe6a116d" (UID: "45a5c664-b036-454f-afc8-6024fe6a116d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:42:32 crc kubenswrapper[4904]: I1121 14:42:32.251425 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5c664-b036-454f-afc8-6024fe6a116d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:42:32 crc kubenswrapper[4904]: I1121 14:42:32.484739 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whbdk"] Nov 21 14:42:32 crc kubenswrapper[4904]: I1121 14:42:32.499864 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-whbdk"] Nov 21 14:42:32 crc kubenswrapper[4904]: I1121 14:42:32.530529 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" path="/var/lib/kubelet/pods/45a5c664-b036-454f-afc8-6024fe6a116d/volumes" Nov 21 14:42:58 crc kubenswrapper[4904]: I1121 14:42:58.114335 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:42:58 crc kubenswrapper[4904]: I1121 14:42:58.115412 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.114498 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.115604 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.115832 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.117735 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.117839 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" gracePeriod=600 Nov 21 14:43:28 crc kubenswrapper[4904]: E1121 14:43:28.392985 4904 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:43:28 crc kubenswrapper[4904]: E1121 14:43:28.404121 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e1548b_c40d_450b_a2f1_51e56c467178.slice/crio-conmon-17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e1548b_c40d_450b_a2f1_51e56c467178.slice/crio-17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.938356 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" exitCode=0 Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.938423 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7"} Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.938475 4904 scope.go:117] "RemoveContainer" containerID="64ae5b3e8909be4ed6a624710ffc81ffb8957c8ae45fd8e8355f9a26c1c1eaf5" Nov 21 14:43:28 crc kubenswrapper[4904]: I1121 14:43:28.940058 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:43:28 crc kubenswrapper[4904]: E1121 14:43:28.940900 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:43:31 crc kubenswrapper[4904]: I1121 14:43:31.321160 4904 scope.go:117] "RemoveContainer" containerID="91c637e3a48f6a3fb2164406f1b52ee4c765e1475746408b55ac1ddc57500e8a" Nov 21 14:43:44 crc kubenswrapper[4904]: I1121 14:43:44.513983 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:43:44 crc kubenswrapper[4904]: E1121 14:43:44.515041 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:43:57 crc kubenswrapper[4904]: I1121 14:43:57.513512 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:43:57 crc 
kubenswrapper[4904]: E1121 14:43:57.514998 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:44:08 crc kubenswrapper[4904]: I1121 14:44:08.513966 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:44:08 crc kubenswrapper[4904]: E1121 14:44:08.514997 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:44:22 crc kubenswrapper[4904]: I1121 14:44:22.513850 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:44:22 crc kubenswrapper[4904]: E1121 14:44:22.515284 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:44:34 crc kubenswrapper[4904]: I1121 14:44:34.514466 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:44:34 crc kubenswrapper[4904]: E1121 14:44:34.516020 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:44:46 crc kubenswrapper[4904]: I1121 14:44:46.559068 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:44:46 crc kubenswrapper[4904]: E1121 14:44:46.560296 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.156638 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr"] Nov 21 14:45:00 crc kubenswrapper[4904]: E1121 14:45:00.157821 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" 
containerName="extract-utilities" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.157838 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" containerName="extract-utilities" Nov 21 14:45:00 crc kubenswrapper[4904]: E1121 14:45:00.157858 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" containerName="registry-server" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.157864 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" containerName="registry-server" Nov 21 14:45:00 crc kubenswrapper[4904]: E1121 14:45:00.157900 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" containerName="extract-content" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.157908 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" containerName="extract-content" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.161240 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a5c664-b036-454f-afc8-6024fe6a116d" containerName="registry-server" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.162411 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.166603 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.168296 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.169147 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr"] Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.315841 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-config-volume\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.316617 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwjqn\" (UniqueName: \"kubernetes.io/projected/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-kube-api-access-xwjqn\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.316773 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-secret-volume\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.419730 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-secret-volume\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.419900 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-config-volume\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.420198 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwjqn\" (UniqueName: \"kubernetes.io/projected/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-kube-api-access-xwjqn\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.421051 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-config-volume\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.950122 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwjqn\" (UniqueName: \"kubernetes.io/projected/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-kube-api-access-xwjqn\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:00 crc kubenswrapper[4904]: I1121 14:45:00.954269 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-secret-volume\") pod \"collect-profiles-29395605-9dsgr\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:01 crc kubenswrapper[4904]: I1121 14:45:01.109750 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:01 crc kubenswrapper[4904]: I1121 14:45:01.514593 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:45:01 crc kubenswrapper[4904]: E1121 14:45:01.515547 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:45:01 crc kubenswrapper[4904]: I1121 14:45:01.673260 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr"] Nov 21 14:45:02 crc kubenswrapper[4904]: I1121 14:45:02.168782 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" event={"ID":"e2c9eb7f-cc64-4b66-975d-44cba0c94c78","Type":"ContainerStarted","Data":"7571c8fbde2659fa72a90ed5b3d023e0ab441740c41f5a2dbf8c4f5d875c0c5f"} Nov 21 14:45:02 crc kubenswrapper[4904]: I1121 14:45:02.169336 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" event={"ID":"e2c9eb7f-cc64-4b66-975d-44cba0c94c78","Type":"ContainerStarted","Data":"9c6f842646e783891cc874333c9d9808bbf9138f1af59dade3b1c6d646929470"} Nov 21 14:45:02 crc kubenswrapper[4904]: I1121 14:45:02.211199 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" podStartSLOduration=2.211171828 podStartE2EDuration="2.211171828s" podCreationTimestamp="2025-11-21 14:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 14:45:02.198693564 +0000 UTC m=+4376.320226116" watchObservedRunningTime="2025-11-21 14:45:02.211171828 +0000 UTC m=+4376.332704380" Nov 21 14:45:03 crc kubenswrapper[4904]: I1121 14:45:03.182612 4904 generic.go:334] "Generic (PLEG): container finished" podID="e2c9eb7f-cc64-4b66-975d-44cba0c94c78" containerID="7571c8fbde2659fa72a90ed5b3d023e0ab441740c41f5a2dbf8c4f5d875c0c5f" exitCode=0 Nov 21 14:45:03 crc kubenswrapper[4904]: I1121 14:45:03.182713 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" event={"ID":"e2c9eb7f-cc64-4b66-975d-44cba0c94c78","Type":"ContainerDied","Data":"7571c8fbde2659fa72a90ed5b3d023e0ab441740c41f5a2dbf8c4f5d875c0c5f"} Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.741940 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.846012 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-secret-volume\") pod \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.846368 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwjqn\" (UniqueName: \"kubernetes.io/projected/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-kube-api-access-xwjqn\") pod \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.846435 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-config-volume\") pod \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\" (UID: \"e2c9eb7f-cc64-4b66-975d-44cba0c94c78\") " Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.847425 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2c9eb7f-cc64-4b66-975d-44cba0c94c78" (UID: "e2c9eb7f-cc64-4b66-975d-44cba0c94c78"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.853855 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-kube-api-access-xwjqn" (OuterVolumeSpecName: "kube-api-access-xwjqn") pod "e2c9eb7f-cc64-4b66-975d-44cba0c94c78" (UID: "e2c9eb7f-cc64-4b66-975d-44cba0c94c78"). InnerVolumeSpecName "kube-api-access-xwjqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.857452 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2c9eb7f-cc64-4b66-975d-44cba0c94c78" (UID: "e2c9eb7f-cc64-4b66-975d-44cba0c94c78"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.949055 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwjqn\" (UniqueName: \"kubernetes.io/projected/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-kube-api-access-xwjqn\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.949107 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:04 crc kubenswrapper[4904]: I1121 14:45:04.949123 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2c9eb7f-cc64-4b66-975d-44cba0c94c78-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:05 crc kubenswrapper[4904]: I1121 14:45:05.210844 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" event={"ID":"e2c9eb7f-cc64-4b66-975d-44cba0c94c78","Type":"ContainerDied","Data":"9c6f842646e783891cc874333c9d9808bbf9138f1af59dade3b1c6d646929470"} Nov 21 14:45:05 crc kubenswrapper[4904]: I1121 14:45:05.210901 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6f842646e783891cc874333c9d9808bbf9138f1af59dade3b1c6d646929470" Nov 21 14:45:05 crc kubenswrapper[4904]: I1121 14:45:05.210910 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr" Nov 21 14:45:05 crc kubenswrapper[4904]: I1121 14:45:05.845376 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz"] Nov 21 14:45:05 crc kubenswrapper[4904]: I1121 14:45:05.857054 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395560-gpdbz"] Nov 21 14:45:06 crc kubenswrapper[4904]: I1121 14:45:06.530768 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19b62d5-1edd-42a2-b023-9f7f4e71c368" path="/var/lib/kubelet/pods/f19b62d5-1edd-42a2-b023-9f7f4e71c368/volumes" Nov 21 14:45:13 crc kubenswrapper[4904]: I1121 14:45:13.513390 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:45:13 crc kubenswrapper[4904]: E1121 14:45:13.515967 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:45:24 crc kubenswrapper[4904]: I1121 14:45:24.514432 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:45:24 crc kubenswrapper[4904]: E1121 14:45:24.515593 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:45:31 crc kubenswrapper[4904]: I1121 14:45:31.414339 4904 scope.go:117] "RemoveContainer" containerID="0030bffab16f6c7e1a5bf21529ffa033da7332d95454adcc04fcae4d791fea82" Nov 21 14:45:35 crc kubenswrapper[4904]: I1121 14:45:35.514489 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:45:35 crc kubenswrapper[4904]: E1121 14:45:35.515560 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:45:48 crc kubenswrapper[4904]: I1121 14:45:48.513627 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:45:48 crc kubenswrapper[4904]: E1121 14:45:48.514763 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:45:50 crc kubenswrapper[4904]: I1121 14:45:50.785211 4904 generic.go:334] "Generic (PLEG): container finished" podID="b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" containerID="a99eb0cd9792273dec221516aeae61833f1fe7d49fb6dd72ce8b6fa3a1ffc92b" exitCode=0 Nov 21 14:45:50 crc kubenswrapper[4904]: I1121 14:45:50.785345 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" event={"ID":"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19","Type":"ContainerDied","Data":"a99eb0cd9792273dec221516aeae61833f1fe7d49fb6dd72ce8b6fa3a1ffc92b"} Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.326137 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.363090 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-inventory\") pod \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.363439 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ssh-key\") pod \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.363486 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-secret-0\") pod \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.363598 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ceph\") pod \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.363672 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8lt9\" (UniqueName: \"kubernetes.io/projected/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-kube-api-access-t8lt9\") pod \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.363724 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-combined-ca-bundle\") pod \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\" (UID: \"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19\") " Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.378137 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" (UID: "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.378212 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ceph" (OuterVolumeSpecName: "ceph") pod "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" (UID: "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.378304 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-kube-api-access-t8lt9" (OuterVolumeSpecName: "kube-api-access-t8lt9") pod "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" (UID: "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19"). InnerVolumeSpecName "kube-api-access-t8lt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.410311 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" (UID: "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.416312 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" (UID: "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.437037 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-inventory" (OuterVolumeSpecName: "inventory") pod "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" (UID: "b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.466839 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.467277 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.467391 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.467478 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8lt9\" (UniqueName: \"kubernetes.io/projected/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-kube-api-access-t8lt9\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.467575 4904 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.467744 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.812751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" event={"ID":"b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19","Type":"ContainerDied","Data":"c49baf06beb6ede05f63cab8f56c196324fc40b9936311287f8cd38c67a382ee"} Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.812802 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c49baf06beb6ede05f63cab8f56c196324fc40b9936311287f8cd38c67a382ee" Nov 21 14:45:52 crc kubenswrapper[4904]: I1121 14:45:52.812805 4904 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.020566 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl"] Nov 21 14:45:53 crc kubenswrapper[4904]: E1121 14:45:53.021253 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c9eb7f-cc64-4b66-975d-44cba0c94c78" containerName="collect-profiles" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.021277 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c9eb7f-cc64-4b66-975d-44cba0c94c78" containerName="collect-profiles" Nov 21 14:45:53 crc kubenswrapper[4904]: E1121 14:45:53.021306 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.021316 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.021884 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.021922 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c9eb7f-cc64-4b66-975d-44cba0c94c78" containerName="collect-profiles" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.023604 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.028839 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.031119 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.031335 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.033399 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.034266 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.034639 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.034923 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.035176 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.035431 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.036113 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl"] Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.191561 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8svr7\" (UniqueName: \"kubernetes.io/projected/f238fb8b-7193-4412-ac72-19c3161f2735-kube-api-access-8svr7\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.191686 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.191758 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.191913 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.191938 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.191969 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.192076 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.192122 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.192310 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.192413 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.192471 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.294776 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.294839 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.294890 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.294910 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.294930 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.294950 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.294966 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.294994 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.295023 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.295043 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.295104 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8svr7\" (UniqueName: \"kubernetes.io/projected/f238fb8b-7193-4412-ac72-19c3161f2735-kube-api-access-8svr7\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.297292 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.297508 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.302132 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.303295 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.304946 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.305226 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.302439 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.307960 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.308347 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.313463 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.314714 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8svr7\" (UniqueName: \"kubernetes.io/projected/f238fb8b-7193-4412-ac72-19c3161f2735-kube-api-access-8svr7\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.360404 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.963699 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl"] Nov 21 14:45:53 crc kubenswrapper[4904]: I1121 14:45:53.969753 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 14:45:54 crc kubenswrapper[4904]: I1121 14:45:54.838174 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" event={"ID":"f238fb8b-7193-4412-ac72-19c3161f2735","Type":"ContainerStarted","Data":"f0219cff23bdd419e3cdb27dd4c55a4f6092741b4923928942580f3545d58235"} Nov 21 14:45:55 crc kubenswrapper[4904]: I1121 14:45:55.854010 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" event={"ID":"f238fb8b-7193-4412-ac72-19c3161f2735","Type":"ContainerStarted","Data":"241efc736c5670b12173647a60beae403cc97f7444a6c8558ee371b377a267b1"} Nov 21 14:45:55 crc kubenswrapper[4904]: I1121 14:45:55.880027 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" podStartSLOduration=3.234328942 podStartE2EDuration="3.880007336s" podCreationTimestamp="2025-11-21 14:45:52 +0000 UTC" firstStartedPulling="2025-11-21 14:45:53.96943249 +0000 UTC m=+4428.090965042" lastFinishedPulling="2025-11-21 14:45:54.615110884 +0000 UTC m=+4428.736643436" observedRunningTime="2025-11-21 14:45:55.876624653 +0000 UTC m=+4429.998157225" watchObservedRunningTime="2025-11-21 14:45:55.880007336 +0000 UTC m=+4430.001539888" Nov 21 14:45:59 crc kubenswrapper[4904]: I1121 14:45:59.515221 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:45:59 crc kubenswrapper[4904]: E1121 14:45:59.517913 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:46:13 crc kubenswrapper[4904]: I1121 14:46:13.513300 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:46:13 crc kubenswrapper[4904]: E1121 14:46:13.514381 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:46:26 crc kubenswrapper[4904]: I1121 14:46:26.521865 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:46:26 crc kubenswrapper[4904]: E1121 14:46:26.523650 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.174176 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-25pdn"] Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.177400 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.196142 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-25pdn"] Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.307331 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rtq5\" (UniqueName: \"kubernetes.io/projected/64d3e12a-8d16-4958-b299-dcd6d7ad84de-kube-api-access-8rtq5\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.307617 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-utilities\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.308210 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-catalog-content\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.372319 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vrcx9"] Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.374746 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.387547 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrcx9"] Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.411358 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-catalog-content\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.411499 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rtq5\" (UniqueName: \"kubernetes.io/projected/64d3e12a-8d16-4958-b299-dcd6d7ad84de-kube-api-access-8rtq5\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.411545 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-utilities\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.412193 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-utilities\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " 
pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.412227 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-catalog-content\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.456451 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rtq5\" (UniqueName: \"kubernetes.io/projected/64d3e12a-8d16-4958-b299-dcd6d7ad84de-kube-api-access-8rtq5\") pod \"redhat-marketplace-25pdn\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.502927 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.514259 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-utilities\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.514709 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-catalog-content\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.514794 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97x8k\" (UniqueName: \"kubernetes.io/projected/15634596-86b5-4901-b629-6a99beba80db-kube-api-access-97x8k\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.617942 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-utilities\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.618475 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-catalog-content\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.618561 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97x8k\" (UniqueName: \"kubernetes.io/projected/15634596-86b5-4901-b629-6a99beba80db-kube-api-access-97x8k\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.618648 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-utilities\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.618942 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-catalog-content\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.655097 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97x8k\" (UniqueName: \"kubernetes.io/projected/15634596-86b5-4901-b629-6a99beba80db-kube-api-access-97x8k\") pod \"redhat-operators-vrcx9\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:37 crc kubenswrapper[4904]: I1121 14:46:37.709129 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:38 crc kubenswrapper[4904]: I1121 14:46:38.182884 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-25pdn"] Nov 21 14:46:38 crc kubenswrapper[4904]: I1121 14:46:38.335288 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrcx9"] Nov 21 14:46:38 crc kubenswrapper[4904]: I1121 14:46:38.458712 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25pdn" event={"ID":"64d3e12a-8d16-4958-b299-dcd6d7ad84de","Type":"ContainerStarted","Data":"c8ad59d1dcdf1f943222f63c0c16024638dec429901413ac792c566ce2cb8e78"} Nov 21 14:46:38 crc kubenswrapper[4904]: I1121 14:46:38.460081 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrcx9" event={"ID":"15634596-86b5-4901-b629-6a99beba80db","Type":"ContainerStarted","Data":"397525cd9bf5870dc2d17e67ad43b7d16e51d7b3cb320f4d7a58bb80c406c172"} Nov 21 14:46:39 crc kubenswrapper[4904]: I1121 14:46:39.475758 4904 generic.go:334] "Generic (PLEG): container finished" podID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerID="573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79" exitCode=0 Nov 21 14:46:39 crc kubenswrapper[4904]: I1121 14:46:39.475831 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25pdn" event={"ID":"64d3e12a-8d16-4958-b299-dcd6d7ad84de","Type":"ContainerDied","Data":"573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79"} Nov 21 14:46:39 crc kubenswrapper[4904]: I1121 14:46:39.481947 4904 generic.go:334] "Generic (PLEG): container finished" podID="15634596-86b5-4901-b629-6a99beba80db" containerID="dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76" exitCode=0 Nov 21 14:46:39 crc kubenswrapper[4904]: I1121 14:46:39.481997 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrcx9" event={"ID":"15634596-86b5-4901-b629-6a99beba80db","Type":"ContainerDied","Data":"dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76"} Nov 21 14:46:40 crc kubenswrapper[4904]: I1121 14:46:40.514118 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:46:40 
crc kubenswrapper[4904]: E1121 14:46:40.514825 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:46:41 crc kubenswrapper[4904]: I1121 14:46:41.509118 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25pdn" event={"ID":"64d3e12a-8d16-4958-b299-dcd6d7ad84de","Type":"ContainerStarted","Data":"f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2"} Nov 21 14:46:41 crc kubenswrapper[4904]: I1121 14:46:41.512364 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrcx9" event={"ID":"15634596-86b5-4901-b629-6a99beba80db","Type":"ContainerStarted","Data":"759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2"} Nov 21 14:46:43 crc kubenswrapper[4904]: I1121 14:46:43.540357 4904 generic.go:334] "Generic (PLEG): container finished" podID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerID="f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2" exitCode=0 Nov 21 14:46:43 crc kubenswrapper[4904]: I1121 14:46:43.540471 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25pdn" event={"ID":"64d3e12a-8d16-4958-b299-dcd6d7ad84de","Type":"ContainerDied","Data":"f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2"} Nov 21 14:46:46 crc kubenswrapper[4904]: I1121 14:46:46.578891 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25pdn" event={"ID":"64d3e12a-8d16-4958-b299-dcd6d7ad84de","Type":"ContainerStarted","Data":"e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266"} Nov 21 14:46:46 crc kubenswrapper[4904]: I1121 14:46:46.604255 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-25pdn" podStartSLOduration=3.17742615 podStartE2EDuration="9.604234288s" podCreationTimestamp="2025-11-21 14:46:37 +0000 UTC" firstStartedPulling="2025-11-21 14:46:39.478837402 +0000 UTC m=+4473.600369954" lastFinishedPulling="2025-11-21 14:46:45.90564554 +0000 UTC m=+4480.027178092" observedRunningTime="2025-11-21 14:46:46.598072478 +0000 UTC m=+4480.719605030" watchObservedRunningTime="2025-11-21 14:46:46.604234288 +0000 UTC m=+4480.725766840" Nov 21 14:46:47 crc kubenswrapper[4904]: I1121 14:46:47.503731 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:47 crc kubenswrapper[4904]: I1121 14:46:47.503801 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:46:48 crc kubenswrapper[4904]: I1121 14:46:48.570353 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-25pdn" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="registry-server" probeResult="failure" output=< Nov 21 14:46:48 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:46:48 crc kubenswrapper[4904]: > Nov 21 14:46:51 crc kubenswrapper[4904]: I1121 14:46:51.514035 4904 scope.go:117] "RemoveContainer" 
containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:46:51 crc kubenswrapper[4904]: E1121 14:46:51.515287 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:46:52 crc kubenswrapper[4904]: I1121 14:46:52.663396 4904 generic.go:334] "Generic (PLEG): container finished" podID="15634596-86b5-4901-b629-6a99beba80db" containerID="759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2" exitCode=0 Nov 21 14:46:52 crc kubenswrapper[4904]: I1121 14:46:52.663469 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrcx9" event={"ID":"15634596-86b5-4901-b629-6a99beba80db","Type":"ContainerDied","Data":"759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2"} Nov 21 14:46:55 crc kubenswrapper[4904]: I1121 14:46:55.706833 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrcx9" event={"ID":"15634596-86b5-4901-b629-6a99beba80db","Type":"ContainerStarted","Data":"36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2"} Nov 21 14:46:55 crc kubenswrapper[4904]: I1121 14:46:55.745260 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vrcx9" podStartSLOduration=4.52160107 podStartE2EDuration="18.745227791s" podCreationTimestamp="2025-11-21 14:46:37 +0000 UTC" firstStartedPulling="2025-11-21 14:46:39.484012548 +0000 UTC m=+4473.605545110" lastFinishedPulling="2025-11-21 14:46:53.707639279 +0000 UTC m=+4487.829171831" observedRunningTime="2025-11-21 14:46:55.724798403 +0000 UTC m=+4489.846330955" watchObservedRunningTime="2025-11-21 14:46:55.745227791 +0000 UTC m=+4489.866760363" Nov 21 14:46:57 crc kubenswrapper[4904]: I1121 14:46:57.713415 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:57 crc kubenswrapper[4904]: I1121 14:46:57.713913 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:46:58 crc kubenswrapper[4904]: I1121 14:46:58.696825 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-25pdn" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="registry-server" probeResult="failure" output=< Nov 21 14:46:58 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:46:58 crc kubenswrapper[4904]: > Nov 21 14:46:58 crc kubenswrapper[4904]: I1121 14:46:58.774498 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrcx9" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="registry-server" probeResult="failure" output=< Nov 21 14:46:58 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:46:58 crc kubenswrapper[4904]: > Nov 21 14:47:03 crc kubenswrapper[4904]: I1121 14:47:03.514105 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:47:03 crc kubenswrapper[4904]: E1121 14:47:03.515160 4904 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:47:07 crc kubenswrapper[4904]: I1121 14:47:07.798389 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:47:07 crc kubenswrapper[4904]: I1121 14:47:07.855430 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:47:08 crc kubenswrapper[4904]: I1121 14:47:08.384056 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-25pdn"] Nov 21 14:47:08 crc kubenswrapper[4904]: I1121 14:47:08.825736 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrcx9" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="registry-server" probeResult="failure" output=< Nov 21 14:47:08 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:47:08 crc kubenswrapper[4904]: > Nov 21 14:47:08 crc kubenswrapper[4904]: I1121 14:47:08.872859 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-25pdn" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="registry-server" containerID="cri-o://e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266" gracePeriod=2 Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.410106 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.547196 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rtq5\" (UniqueName: \"kubernetes.io/projected/64d3e12a-8d16-4958-b299-dcd6d7ad84de-kube-api-access-8rtq5\") pod \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.547347 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-catalog-content\") pod \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.547519 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-utilities\") pod \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\" (UID: \"64d3e12a-8d16-4958-b299-dcd6d7ad84de\") " Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.548921 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-utilities" (OuterVolumeSpecName: "utilities") pod "64d3e12a-8d16-4958-b299-dcd6d7ad84de" (UID: "64d3e12a-8d16-4958-b299-dcd6d7ad84de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.577762 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64d3e12a-8d16-4958-b299-dcd6d7ad84de" (UID: "64d3e12a-8d16-4958-b299-dcd6d7ad84de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.650510 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.650552 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d3e12a-8d16-4958-b299-dcd6d7ad84de-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.889177 4904 generic.go:334] "Generic (PLEG): container finished" podID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerID="e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266" exitCode=0 Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.889312 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25pdn" event={"ID":"64d3e12a-8d16-4958-b299-dcd6d7ad84de","Type":"ContainerDied","Data":"e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266"} Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.889841 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25pdn" event={"ID":"64d3e12a-8d16-4958-b299-dcd6d7ad84de","Type":"ContainerDied","Data":"c8ad59d1dcdf1f943222f63c0c16024638dec429901413ac792c566ce2cb8e78"} Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.889880 4904 scope.go:117] "RemoveContainer" containerID="e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266" Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.889343 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25pdn" Nov 21 14:47:09 crc kubenswrapper[4904]: I1121 14:47:09.919905 4904 scope.go:117] "RemoveContainer" containerID="f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.569850 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d3e12a-8d16-4958-b299-dcd6d7ad84de-kube-api-access-8rtq5" (OuterVolumeSpecName: "kube-api-access-8rtq5") pod "64d3e12a-8d16-4958-b299-dcd6d7ad84de" (UID: "64d3e12a-8d16-4958-b299-dcd6d7ad84de"). InnerVolumeSpecName "kube-api-access-8rtq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.577457 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rtq5\" (UniqueName: \"kubernetes.io/projected/64d3e12a-8d16-4958-b299-dcd6d7ad84de-kube-api-access-8rtq5\") on node \"crc\" DevicePath \"\"" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.608488 4904 scope.go:117] "RemoveContainer" containerID="573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.709724 4904 scope.go:117] "RemoveContainer" containerID="e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266" Nov 21 14:47:10 crc kubenswrapper[4904]: E1121 14:47:10.710455 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266\": container with ID starting with e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266 not found: ID does not exist" containerID="e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.710498 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266"} err="failed to get container status \"e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266\": rpc error: code = NotFound desc = could not find container \"e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266\": container with ID starting with e74bdebaa9cbb7317851cd328dd5d8e3d3c900cf0868b812fa48d0a6d0540266 not found: ID does not exist" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.710524 4904 scope.go:117] "RemoveContainer" containerID="f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2" Nov 21 14:47:10 crc kubenswrapper[4904]: E1121 14:47:10.711127 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2\": container with ID starting with f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2 not found: ID does not exist" containerID="f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.711159 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2"} err="failed to get container status \"f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2\": rpc error: code = NotFound desc = could not find container \"f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2\": container with ID starting with f95421fe36bc952142bf86066b6fce1ae2ef864f5bf448d0a2d96c54f5876af2 not found: ID does not exist" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.711181 4904 scope.go:117] "RemoveContainer" containerID="573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79" Nov 21 14:47:10 crc kubenswrapper[4904]: E1121 14:47:10.711513 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79\": container with ID starting with 573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79 not found: ID does not 
exist" containerID="573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.711543 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79"} err="failed to get container status \"573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79\": rpc error: code = NotFound desc = could not find container \"573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79\": container with ID starting with 573ce9204b482426a32953782d76571a06376a12ee39e5c67be3e37bb14e0a79 not found: ID does not exist" Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.838423 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-25pdn"] Nov 21 14:47:10 crc kubenswrapper[4904]: I1121 14:47:10.853623 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-25pdn"] Nov 21 14:47:12 crc kubenswrapper[4904]: I1121 14:47:12.526171 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" path="/var/lib/kubelet/pods/64d3e12a-8d16-4958-b299-dcd6d7ad84de/volumes" Nov 21 14:47:16 crc kubenswrapper[4904]: I1121 14:47:16.513786 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:47:16 crc kubenswrapper[4904]: E1121 14:47:16.514853 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:47:18 crc kubenswrapper[4904]: I1121 14:47:18.777108 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrcx9" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="registry-server" probeResult="failure" output=< Nov 21 14:47:18 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:47:18 crc kubenswrapper[4904]: > Nov 21 14:47:28 crc kubenswrapper[4904]: I1121 14:47:28.778270 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrcx9" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="registry-server" probeResult="failure" output=< Nov 21 14:47:28 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:47:28 crc kubenswrapper[4904]: > Nov 21 14:47:29 crc kubenswrapper[4904]: I1121 14:47:29.514729 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:47:29 crc kubenswrapper[4904]: E1121 14:47:29.515251 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:47:37 crc kubenswrapper[4904]: I1121 14:47:37.775511 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:47:37 crc kubenswrapper[4904]: I1121 14:47:37.831562 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:47:38 crc kubenswrapper[4904]: I1121 14:47:38.400747 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrcx9"] Nov 21 14:47:39 crc kubenswrapper[4904]: I1121 14:47:39.259717 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vrcx9" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="registry-server" containerID="cri-o://36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2" gracePeriod=2 Nov 21 14:47:39 crc kubenswrapper[4904]: I1121 14:47:39.841449 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.016378 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-utilities\") pod \"15634596-86b5-4901-b629-6a99beba80db\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.016495 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97x8k\" (UniqueName: \"kubernetes.io/projected/15634596-86b5-4901-b629-6a99beba80db-kube-api-access-97x8k\") pod \"15634596-86b5-4901-b629-6a99beba80db\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.017013 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-catalog-content\") pod \"15634596-86b5-4901-b629-6a99beba80db\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.017522 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-utilities" (OuterVolumeSpecName: "utilities") pod "15634596-86b5-4901-b629-6a99beba80db" (UID: "15634596-86b5-4901-b629-6a99beba80db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.018635 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.025064 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15634596-86b5-4901-b629-6a99beba80db-kube-api-access-97x8k" (OuterVolumeSpecName: "kube-api-access-97x8k") pod "15634596-86b5-4901-b629-6a99beba80db" (UID: "15634596-86b5-4901-b629-6a99beba80db"). InnerVolumeSpecName "kube-api-access-97x8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.119025 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15634596-86b5-4901-b629-6a99beba80db" (UID: "15634596-86b5-4901-b629-6a99beba80db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.119934 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-catalog-content\") pod \"15634596-86b5-4901-b629-6a99beba80db\" (UID: \"15634596-86b5-4901-b629-6a99beba80db\") " Nov 21 14:47:40 crc kubenswrapper[4904]: W1121 14:47:40.120138 4904 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/15634596-86b5-4901-b629-6a99beba80db/volumes/kubernetes.io~empty-dir/catalog-content Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.120192 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15634596-86b5-4901-b629-6a99beba80db" (UID: "15634596-86b5-4901-b629-6a99beba80db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.120800 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97x8k\" (UniqueName: \"kubernetes.io/projected/15634596-86b5-4901-b629-6a99beba80db-kube-api-access-97x8k\") on node \"crc\" DevicePath \"\"" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.120830 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15634596-86b5-4901-b629-6a99beba80db-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.274297 4904 generic.go:334] "Generic (PLEG): container finished" podID="15634596-86b5-4901-b629-6a99beba80db" containerID="36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2" exitCode=0 Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.274541 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrcx9" event={"ID":"15634596-86b5-4901-b629-6a99beba80db","Type":"ContainerDied","Data":"36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2"} Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.275863 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrcx9" event={"ID":"15634596-86b5-4901-b629-6a99beba80db","Type":"ContainerDied","Data":"397525cd9bf5870dc2d17e67ad43b7d16e51d7b3cb320f4d7a58bb80c406c172"} Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.274583 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vrcx9" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.276117 4904 scope.go:117] "RemoveContainer" containerID="36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.309568 4904 scope.go:117] "RemoveContainer" containerID="759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.317220 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrcx9"] Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.333028 4904 scope.go:117] "RemoveContainer" containerID="dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.336050 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vrcx9"] Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.386648 4904 scope.go:117] "RemoveContainer" containerID="36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2" Nov 21 14:47:40 crc kubenswrapper[4904]: E1121 14:47:40.387336 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2\": container with ID starting with 36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2 not found: ID does not exist" containerID="36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.387392 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2"} err="failed to get container status \"36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2\": rpc error: code = NotFound desc = could not find container \"36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2\": container with ID starting with 36264e0ef81607a2e4e8236346244c854f1a0dcbab8265cecf370fcec9f66eb2 not found: ID does not exist" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.387426 4904 scope.go:117] "RemoveContainer" containerID="759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2" Nov 21 14:47:40 crc kubenswrapper[4904]: E1121 14:47:40.387871 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2\": container with ID starting with 759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2 not found: ID does not exist" containerID="759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.387930 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2"} err="failed to get container status \"759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2\": rpc error: code = NotFound desc = could not find container \"759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2\": container with ID starting with 759b19aec839916e433f7063959bc7dd9db1523ee1c0a58a919d18890bbe0ce2 not found: ID does not exist" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.387964 4904 scope.go:117] "RemoveContainer" 
containerID="dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76" Nov 21 14:47:40 crc kubenswrapper[4904]: E1121 14:47:40.388323 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76\": container with ID starting with dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76 not found: ID does not exist" containerID="dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.388352 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76"} err="failed to get container status \"dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76\": rpc error: code = NotFound desc = could not find container \"dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76\": container with ID starting with dc373ee4d9cefb9802f238b08e3cb773ebd8fe4bc6002b1f83f5ab92c9a2ce76 not found: ID does not exist" Nov 21 14:47:40 crc kubenswrapper[4904]: I1121 14:47:40.532982 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15634596-86b5-4901-b629-6a99beba80db" path="/var/lib/kubelet/pods/15634596-86b5-4901-b629-6a99beba80db/volumes" Nov 21 14:47:43 crc kubenswrapper[4904]: I1121 14:47:43.514097 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:47:43 crc kubenswrapper[4904]: E1121 14:47:43.515345 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:47:57 crc kubenswrapper[4904]: I1121 14:47:57.513483 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:47:57 crc kubenswrapper[4904]: E1121 14:47:57.516091 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:48:08 crc kubenswrapper[4904]: I1121 14:48:08.514395 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:48:08 crc kubenswrapper[4904]: E1121 14:48:08.515684 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:48:19 crc kubenswrapper[4904]: I1121 14:48:19.514288 4904 scope.go:117] "RemoveContainer" 
containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:48:19 crc kubenswrapper[4904]: E1121 14:48:19.517836 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:48:32 crc kubenswrapper[4904]: I1121 14:48:32.312834 4904 scope.go:117] "RemoveContainer" containerID="e573dc52ed5f835092f8fc5dd2d56de86e5f8e83f4bba14b8c5653bf4f3da866" Nov 21 14:48:32 crc kubenswrapper[4904]: I1121 14:48:32.347434 4904 scope.go:117] "RemoveContainer" containerID="9af752cdcf9e07033ab31c744dd327dbf5396528d51dbb1e413ccf595b83923f" Nov 21 14:48:32 crc kubenswrapper[4904]: I1121 14:48:32.398710 4904 scope.go:117] "RemoveContainer" containerID="a7af8f8d426c98d037886278562007344f477b9a4d737ad296ab4bb0f3c8f148" Nov 21 14:48:34 crc kubenswrapper[4904]: I1121 14:48:34.514296 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7" Nov 21 14:48:34 crc kubenswrapper[4904]: I1121 14:48:34.979316 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"f80a54e3a15fbdc1f6bcdf28edd81f1a732b02b6d7f8869201c38845d1200fa8"} Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.860361 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ld5fc"] Nov 21 14:49:23 crc kubenswrapper[4904]: E1121 14:49:23.863879 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="registry-server" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.864071 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="registry-server" Nov 21 14:49:23 crc kubenswrapper[4904]: E1121 14:49:23.864198 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="extract-content" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.864347 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="extract-content" Nov 21 14:49:23 crc kubenswrapper[4904]: E1121 14:49:23.864553 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="registry-server" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.864691 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="registry-server" Nov 21 14:49:23 crc kubenswrapper[4904]: E1121 14:49:23.864850 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="extract-utilities" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.864981 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="extract-utilities" Nov 21 14:49:23 crc kubenswrapper[4904]: E1121 14:49:23.865163 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="extract-utilities" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.865290 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="extract-utilities" Nov 21 14:49:23 crc kubenswrapper[4904]: E1121 14:49:23.865414 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="extract-content" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.865541 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="extract-content" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.867331 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="15634596-86b5-4901-b629-6a99beba80db" containerName="registry-server" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.867544 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d3e12a-8d16-4958-b299-dcd6d7ad84de" containerName="registry-server" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.870350 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:23 crc kubenswrapper[4904]: I1121 14:49:23.880918 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ld5fc"] Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.000960 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smcqc\" (UniqueName: \"kubernetes.io/projected/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-kube-api-access-smcqc\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.001026 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-utilities\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.001481 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-catalog-content\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.103872 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-catalog-content\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.104466 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-catalog-content\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.104669 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smcqc\" (UniqueName: \"kubernetes.io/projected/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-kube-api-access-smcqc\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.104847 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-utilities\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.105383 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-utilities\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.130823 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smcqc\" (UniqueName: \"kubernetes.io/projected/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-kube-api-access-smcqc\") pod \"certified-operators-ld5fc\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.209206 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:24 crc kubenswrapper[4904]: W1121 14:49:24.759673 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8230e4e3_8464_453a_abcf_8db9ad3fb5ec.slice/crio-b151631090bc5128f41f95d8060f8ff3b19dad4a349b54a0dbc8f4cc74bdbe5e WatchSource:0}: Error finding container b151631090bc5128f41f95d8060f8ff3b19dad4a349b54a0dbc8f4cc74bdbe5e: Status 404 returned error can't find the container with id b151631090bc5128f41f95d8060f8ff3b19dad4a349b54a0dbc8f4cc74bdbe5e Nov 21 14:49:24 crc kubenswrapper[4904]: I1121 14:49:24.768082 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ld5fc"] Nov 21 14:49:25 crc kubenswrapper[4904]: I1121 14:49:25.601505 4904 generic.go:334] "Generic (PLEG): container finished" podID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerID="38a65a1344b717e945507184ecd98305d0a010b3f173b7412f0bad2746a15549" exitCode=0 Nov 21 14:49:25 crc kubenswrapper[4904]: I1121 14:49:25.601623 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ld5fc" event={"ID":"8230e4e3-8464-453a-abcf-8db9ad3fb5ec","Type":"ContainerDied","Data":"38a65a1344b717e945507184ecd98305d0a010b3f173b7412f0bad2746a15549"} Nov 21 14:49:25 crc kubenswrapper[4904]: I1121 14:49:25.601906 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ld5fc" event={"ID":"8230e4e3-8464-453a-abcf-8db9ad3fb5ec","Type":"ContainerStarted","Data":"b151631090bc5128f41f95d8060f8ff3b19dad4a349b54a0dbc8f4cc74bdbe5e"} Nov 21 14:49:27 crc kubenswrapper[4904]: I1121 14:49:27.627028 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ld5fc" 
event={"ID":"8230e4e3-8464-453a-abcf-8db9ad3fb5ec","Type":"ContainerStarted","Data":"f35ea3af77b9ae3c1b0873f3835e14602f72ee8df3dd31d133fb6fe12fe55a5c"} Nov 21 14:49:28 crc kubenswrapper[4904]: I1121 14:49:28.662180 4904 generic.go:334] "Generic (PLEG): container finished" podID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerID="f35ea3af77b9ae3c1b0873f3835e14602f72ee8df3dd31d133fb6fe12fe55a5c" exitCode=0 Nov 21 14:49:28 crc kubenswrapper[4904]: I1121 14:49:28.662260 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ld5fc" event={"ID":"8230e4e3-8464-453a-abcf-8db9ad3fb5ec","Type":"ContainerDied","Data":"f35ea3af77b9ae3c1b0873f3835e14602f72ee8df3dd31d133fb6fe12fe55a5c"} Nov 21 14:49:31 crc kubenswrapper[4904]: I1121 14:49:31.698885 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ld5fc" event={"ID":"8230e4e3-8464-453a-abcf-8db9ad3fb5ec","Type":"ContainerStarted","Data":"e80bb6c7acb954e7f96446127c2b6f86abce42c34686c41fbe5e93bb725bf23c"} Nov 21 14:49:31 crc kubenswrapper[4904]: I1121 14:49:31.725584 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ld5fc" podStartSLOduration=4.208937252 podStartE2EDuration="8.725559067s" podCreationTimestamp="2025-11-21 14:49:23 +0000 UTC" firstStartedPulling="2025-11-21 14:49:25.605059714 +0000 UTC m=+4639.726592266" lastFinishedPulling="2025-11-21 14:49:30.121681529 +0000 UTC m=+4644.243214081" observedRunningTime="2025-11-21 14:49:31.721260772 +0000 UTC m=+4645.842793334" watchObservedRunningTime="2025-11-21 14:49:31.725559067 +0000 UTC m=+4645.847091619" Nov 21 14:49:34 crc kubenswrapper[4904]: I1121 14:49:34.209612 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:34 crc kubenswrapper[4904]: I1121 14:49:34.210251 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:34 crc kubenswrapper[4904]: I1121 14:49:34.308304 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:44 crc kubenswrapper[4904]: I1121 14:49:44.341764 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:44 crc kubenswrapper[4904]: I1121 14:49:44.427257 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ld5fc"] Nov 21 14:49:44 crc kubenswrapper[4904]: I1121 14:49:44.861740 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ld5fc" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerName="registry-server" containerID="cri-o://e80bb6c7acb954e7f96446127c2b6f86abce42c34686c41fbe5e93bb725bf23c" gracePeriod=2 Nov 21 14:49:45 crc kubenswrapper[4904]: I1121 14:49:45.880426 4904 generic.go:334] "Generic (PLEG): container finished" podID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerID="e80bb6c7acb954e7f96446127c2b6f86abce42c34686c41fbe5e93bb725bf23c" exitCode=0 Nov 21 14:49:45 crc kubenswrapper[4904]: I1121 14:49:45.880773 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ld5fc" 
event={"ID":"8230e4e3-8464-453a-abcf-8db9ad3fb5ec","Type":"ContainerDied","Data":"e80bb6c7acb954e7f96446127c2b6f86abce42c34686c41fbe5e93bb725bf23c"} Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.163085 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.333815 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smcqc\" (UniqueName: \"kubernetes.io/projected/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-kube-api-access-smcqc\") pod \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.334133 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-catalog-content\") pod \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.334208 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-utilities\") pod \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\" (UID: \"8230e4e3-8464-453a-abcf-8db9ad3fb5ec\") " Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.335147 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-utilities" (OuterVolumeSpecName: "utilities") pod "8230e4e3-8464-453a-abcf-8db9ad3fb5ec" (UID: "8230e4e3-8464-453a-abcf-8db9ad3fb5ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.357843 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-kube-api-access-smcqc" (OuterVolumeSpecName: "kube-api-access-smcqc") pod "8230e4e3-8464-453a-abcf-8db9ad3fb5ec" (UID: "8230e4e3-8464-453a-abcf-8db9ad3fb5ec"). InnerVolumeSpecName "kube-api-access-smcqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.381881 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8230e4e3-8464-453a-abcf-8db9ad3fb5ec" (UID: "8230e4e3-8464-453a-abcf-8db9ad3fb5ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.437306 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smcqc\" (UniqueName: \"kubernetes.io/projected/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-kube-api-access-smcqc\") on node \"crc\" DevicePath \"\"" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.437346 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.437355 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8230e4e3-8464-453a-abcf-8db9ad3fb5ec-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.898264 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ld5fc" event={"ID":"8230e4e3-8464-453a-abcf-8db9ad3fb5ec","Type":"ContainerDied","Data":"b151631090bc5128f41f95d8060f8ff3b19dad4a349b54a0dbc8f4cc74bdbe5e"} Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.898439 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ld5fc" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.898784 4904 scope.go:117] "RemoveContainer" containerID="e80bb6c7acb954e7f96446127c2b6f86abce42c34686c41fbe5e93bb725bf23c" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.932199 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ld5fc"] Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.938071 4904 scope.go:117] "RemoveContainer" containerID="f35ea3af77b9ae3c1b0873f3835e14602f72ee8df3dd31d133fb6fe12fe55a5c" Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.944520 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ld5fc"] Nov 21 14:49:46 crc kubenswrapper[4904]: I1121 14:49:46.966404 4904 scope.go:117] "RemoveContainer" containerID="38a65a1344b717e945507184ecd98305d0a010b3f173b7412f0bad2746a15549" Nov 21 14:49:48 crc kubenswrapper[4904]: I1121 14:49:48.526049 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" path="/var/lib/kubelet/pods/8230e4e3-8464-453a-abcf-8db9ad3fb5ec/volumes" Nov 21 14:50:58 crc kubenswrapper[4904]: I1121 14:50:58.113451 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 14:50:58 crc kubenswrapper[4904]: I1121 14:50:58.114315 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 14:51:28 crc kubenswrapper[4904]: I1121 14:51:28.113713 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:51:28 crc kubenswrapper[4904]: I1121 14:51:28.114599 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:51:41 crc kubenswrapper[4904]: I1121 14:51:41.291003 4904 generic.go:334] "Generic (PLEG): container finished" podID="f238fb8b-7193-4412-ac72-19c3161f2735" containerID="241efc736c5670b12173647a60beae403cc97f7444a6c8558ee371b377a267b1" exitCode=0
Nov 21 14:51:41 crc kubenswrapper[4904]: I1121 14:51:41.291162 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" event={"ID":"f238fb8b-7193-4412-ac72-19c3161f2735","Type":"ContainerDied","Data":"241efc736c5670b12173647a60beae403cc97f7444a6c8558ee371b377a267b1"}
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.814962 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl"
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979131 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-0\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979194 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-ceph-nova-0\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979254 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-nova-extra-config-0\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979323 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-1\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979415 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8svr7\" (UniqueName: \"kubernetes.io/projected/f238fb8b-7193-4412-ac72-19c3161f2735-kube-api-access-8svr7\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979436 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-0\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979496 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ssh-key\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979523 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ceph\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979576 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-1\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979626 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-inventory\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.979775 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-custom-ceph-combined-ca-bundle\") pod \"f238fb8b-7193-4412-ac72-19c3161f2735\" (UID: \"f238fb8b-7193-4412-ac72-19c3161f2735\") "
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.986238 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ceph" (OuterVolumeSpecName: "ceph") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.989522 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:51:42 crc kubenswrapper[4904]: I1121 14:51:42.992767 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f238fb8b-7193-4412-ac72-19c3161f2735-kube-api-access-8svr7" (OuterVolumeSpecName: "kube-api-access-8svr7") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "kube-api-access-8svr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.015320 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.016556 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.020201 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.020883 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-inventory" (OuterVolumeSpecName: "inventory") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.023924 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.028981 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.030629 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.041628 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f238fb8b-7193-4412-ac72-19c3161f2735" (UID: "f238fb8b-7193-4412-ac72-19c3161f2735"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083323 4904 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-ceph-nova-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083398 4904 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f238fb8b-7193-4412-ac72-19c3161f2735-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083415 4904 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083427 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8svr7\" (UniqueName: \"kubernetes.io/projected/f238fb8b-7193-4412-ac72-19c3161f2735-kube-api-access-8svr7\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083436 4904 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083445 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083454 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-ceph\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083462 4904 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083470 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083480 4904 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.083493 4904 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f238fb8b-7193-4412-ac72-19c3161f2735-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.313751 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl" event={"ID":"f238fb8b-7193-4412-ac72-19c3161f2735","Type":"ContainerDied","Data":"f0219cff23bdd419e3cdb27dd4c55a4f6092741b4923928942580f3545d58235"}
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.314081 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0219cff23bdd419e3cdb27dd4c55a4f6092741b4923928942580f3545d58235"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.313824 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.429267 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"]
Nov 21 14:51:43 crc kubenswrapper[4904]: E1121 14:51:43.429800 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f238fb8b-7193-4412-ac72-19c3161f2735" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.429815 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f238fb8b-7193-4412-ac72-19c3161f2735" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:51:43 crc kubenswrapper[4904]: E1121 14:51:43.429831 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerName="extract-utilities"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.429837 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerName="extract-utilities"
Nov 21 14:51:43 crc kubenswrapper[4904]: E1121 14:51:43.429854 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerName="registry-server"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.429861 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerName="registry-server"
Nov 21 14:51:43 crc kubenswrapper[4904]: E1121 14:51:43.429884 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerName="extract-content"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.429892 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerName="extract-content"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.430159 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="8230e4e3-8464-453a-abcf-8db9ad3fb5ec" containerName="registry-server"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.430208 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f238fb8b-7193-4412-ac72-19c3161f2735" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.431305 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.434517 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.434751 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.434854 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.436487 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.436732 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.441545 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.446597 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"]
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.601455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.601523 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceph\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.601614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.601752 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.602831 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.602896 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/168c7941-23ec-43f5-8849-04a31a928d0a-kube-api-access-nr5fw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.603057 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.603186 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.705596 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.705709 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/168c7941-23ec-43f5-8849-04a31a928d0a-kube-api-access-nr5fw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.705779 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.705825 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.705925 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.705954 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceph\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.706022 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.706122 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.711780 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.711810 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.711981 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.712370 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.713352 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.714402 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceph\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.720935 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.724228 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/168c7941-23ec-43f5-8849-04a31a928d0a-kube-api-access-nr5fw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-49lxz\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:43 crc kubenswrapper[4904]: I1121 14:51:43.761699 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:51:44 crc kubenswrapper[4904]: I1121 14:51:44.350599 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"]
Nov 21 14:51:44 crc kubenswrapper[4904]: I1121 14:51:44.356930 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 14:51:45 crc kubenswrapper[4904]: I1121 14:51:45.335752 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz" event={"ID":"168c7941-23ec-43f5-8849-04a31a928d0a","Type":"ContainerStarted","Data":"1d15f636a650cfc6313aaa9a75ae516cdb195f38bb6eccbe9f5dcabf63ea86e5"}
Nov 21 14:51:46 crc kubenswrapper[4904]: I1121 14:51:46.347196 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz" event={"ID":"168c7941-23ec-43f5-8849-04a31a928d0a","Type":"ContainerStarted","Data":"a0be45abe6386e3ad85c11eea5e2ab06ffde7708a824d6288410bfaf1b538820"}
Nov 21 14:51:46 crc kubenswrapper[4904]: I1121 14:51:46.368751 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz" podStartSLOduration=2.594752295 podStartE2EDuration="3.368723459s" podCreationTimestamp="2025-11-21 14:51:43 +0000 UTC" firstStartedPulling="2025-11-21 14:51:44.356647978 +0000 UTC m=+4778.478180530" lastFinishedPulling="2025-11-21 14:51:45.130619142 +0000 UTC m=+4779.252151694" observedRunningTime="2025-11-21 14:51:46.36423526 +0000 UTC m=+4780.485767832" watchObservedRunningTime="2025-11-21 14:51:46.368723459 +0000 UTC m=+4780.490256011"
Nov 21 14:51:58 crc kubenswrapper[4904]: I1121 14:51:58.113413 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:51:58 crc kubenswrapper[4904]: I1121 14:51:58.114274 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:51:58 crc kubenswrapper[4904]: I1121 14:51:58.114332 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 14:51:58 crc kubenswrapper[4904]: I1121 14:51:58.115123 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f80a54e3a15fbdc1f6bcdf28edd81f1a732b02b6d7f8869201c38845d1200fa8"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 14:51:58 crc kubenswrapper[4904]: I1121 14:51:58.115194 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://f80a54e3a15fbdc1f6bcdf28edd81f1a732b02b6d7f8869201c38845d1200fa8" gracePeriod=600
Nov 21 14:51:58 crc kubenswrapper[4904]: I1121 14:51:58.492967 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="f80a54e3a15fbdc1f6bcdf28edd81f1a732b02b6d7f8869201c38845d1200fa8" exitCode=0
Nov 21 14:51:58 crc kubenswrapper[4904]: I1121 14:51:58.493048 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"f80a54e3a15fbdc1f6bcdf28edd81f1a732b02b6d7f8869201c38845d1200fa8"}
Nov 21 14:51:58 crc kubenswrapper[4904]: I1121 14:51:58.493440 4904 scope.go:117] "RemoveContainer" containerID="17798a678d4c762965d90a4c8d6ab9892e9f4b8d850eb2e8c63d60b25da125f7"
Nov 21 14:51:59 crc kubenswrapper[4904]: I1121 14:51:59.526521 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"}
Nov 21 14:53:58 crc kubenswrapper[4904]: I1121 14:53:58.114297 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:53:58 crc kubenswrapper[4904]: I1121 14:53:58.114918 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:54:28 crc kubenswrapper[4904]: I1121 14:54:28.114198 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:54:28 crc kubenswrapper[4904]: I1121 14:54:28.114870 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.114076 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.114759 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.114816 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.115836 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.115891 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" gracePeriod=600
Nov 21 14:54:58 crc kubenswrapper[4904]: E1121 14:54:58.243934 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.592729 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" exitCode=0
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.592783 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"}
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.592828 4904 scope.go:117] "RemoveContainer" containerID="f80a54e3a15fbdc1f6bcdf28edd81f1a732b02b6d7f8869201c38845d1200fa8"
Nov 21 14:54:58 crc kubenswrapper[4904]: I1121 14:54:58.593535 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:54:58 crc kubenswrapper[4904]: E1121 14:54:58.593961 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:55:13 crc kubenswrapper[4904]: I1121 14:55:13.514246 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:55:13 crc kubenswrapper[4904]: E1121 14:55:13.515572 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:55:26 crc kubenswrapper[4904]: I1121 14:55:26.523006 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:55:26 crc kubenswrapper[4904]: E1121 14:55:26.524039 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:55:41 crc kubenswrapper[4904]: I1121 14:55:41.514908 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:55:41 crc kubenswrapper[4904]: E1121 14:55:41.516603 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:55:56 crc kubenswrapper[4904]: I1121 14:55:56.513526 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:55:56 crc kubenswrapper[4904]: E1121 14:55:56.514354 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:56:09 crc kubenswrapper[4904]: I1121 14:56:09.513712 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:56:09 crc kubenswrapper[4904]: E1121 14:56:09.514586 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:56:22 crc kubenswrapper[4904]: I1121 14:56:22.513104 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:56:22 crc kubenswrapper[4904]: E1121 14:56:22.513943 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:56:34 crc kubenswrapper[4904]: I1121 14:56:34.513683 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:56:34 crc kubenswrapper[4904]: E1121 14:56:34.514800 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:56:48 crc kubenswrapper[4904]: I1121 14:56:48.512999 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988"
Nov 21 14:56:48 crc kubenswrapper[4904]: E1121 14:56:48.513759 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 14:56:52 crc kubenswrapper[4904]: I1121 14:56:52.456316 4904 generic.go:334] "Generic (PLEG): container finished" podID="168c7941-23ec-43f5-8849-04a31a928d0a" containerID="a0be45abe6386e3ad85c11eea5e2ab06ffde7708a824d6288410bfaf1b538820" exitCode=0
Nov 21 14:56:52 crc kubenswrapper[4904]: I1121 14:56:52.456454 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz" event={"ID":"168c7941-23ec-43f5-8849-04a31a928d0a","Type":"ContainerDied","Data":"a0be45abe6386e3ad85c11eea5e2ab06ffde7708a824d6288410bfaf1b538820"}
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.920359 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.988006 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/168c7941-23ec-43f5-8849-04a31a928d0a-kube-api-access-nr5fw\") pod \"168c7941-23ec-43f5-8849-04a31a928d0a\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") "
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.988076 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-0\") pod \"168c7941-23ec-43f5-8849-04a31a928d0a\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") "
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.988184 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-inventory\") pod \"168c7941-23ec-43f5-8849-04a31a928d0a\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") "
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.988359 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ssh-key\") pod \"168c7941-23ec-43f5-8849-04a31a928d0a\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") "
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.988502 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-2\") pod \"168c7941-23ec-43f5-8849-04a31a928d0a\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") "
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.988573 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceph\") pod \"168c7941-23ec-43f5-8849-04a31a928d0a\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") "
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.988619 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-1\") pod \"168c7941-23ec-43f5-8849-04a31a928d0a\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") "
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.988639 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-telemetry-combined-ca-bundle\") pod \"168c7941-23ec-43f5-8849-04a31a928d0a\" (UID: \"168c7941-23ec-43f5-8849-04a31a928d0a\") "
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.994363 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168c7941-23ec-43f5-8849-04a31a928d0a-kube-api-access-nr5fw" (OuterVolumeSpecName: "kube-api-access-nr5fw") pod "168c7941-23ec-43f5-8849-04a31a928d0a" (UID: "168c7941-23ec-43f5-8849-04a31a928d0a"). InnerVolumeSpecName "kube-api-access-nr5fw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 14:56:53 crc kubenswrapper[4904]: I1121 14:56:53.994558 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceph" (OuterVolumeSpecName: "ceph") pod "168c7941-23ec-43f5-8849-04a31a928d0a" (UID: "168c7941-23ec-43f5-8849-04a31a928d0a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.007940 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "168c7941-23ec-43f5-8849-04a31a928d0a" (UID: "168c7941-23ec-43f5-8849-04a31a928d0a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.019743 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "168c7941-23ec-43f5-8849-04a31a928d0a" (UID: "168c7941-23ec-43f5-8849-04a31a928d0a"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.023411 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "168c7941-23ec-43f5-8849-04a31a928d0a" (UID: "168c7941-23ec-43f5-8849-04a31a928d0a"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.024702 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-inventory" (OuterVolumeSpecName: "inventory") pod "168c7941-23ec-43f5-8849-04a31a928d0a" (UID: "168c7941-23ec-43f5-8849-04a31a928d0a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.025157 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "168c7941-23ec-43f5-8849-04a31a928d0a" (UID: "168c7941-23ec-43f5-8849-04a31a928d0a"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.027119 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "168c7941-23ec-43f5-8849-04a31a928d0a" (UID: "168c7941-23ec-43f5-8849-04a31a928d0a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.092221 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.092259 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceph\") on node \"crc\" DevicePath \"\""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.092270 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.092284 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.092295 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr5fw\" (UniqueName: \"kubernetes.io/projected/168c7941-23ec-43f5-8849-04a31a928d0a-kube-api-access-nr5fw\") on node \"crc\" DevicePath \"\""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.092306 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.092620 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-inventory\") on node \"crc\" DevicePath \"\""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.092643 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168c7941-23ec-43f5-8849-04a31a928d0a-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.478277 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz" event={"ID":"168c7941-23ec-43f5-8849-04a31a928d0a","Type":"ContainerDied","Data":"1d15f636a650cfc6313aaa9a75ae516cdb195f38bb6eccbe9f5dcabf63ea86e5"}
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.478683 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d15f636a650cfc6313aaa9a75ae516cdb195f38bb6eccbe9f5dcabf63ea86e5"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.478344 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-49lxz"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.590349 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"]
Nov 21 14:56:54 crc kubenswrapper[4904]: E1121 14:56:54.590878 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168c7941-23ec-43f5-8849-04a31a928d0a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.590899 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="168c7941-23ec-43f5-8849-04a31a928d0a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.591148 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="168c7941-23ec-43f5-8849-04a31a928d0a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.591929 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.593950 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.594034 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.594091 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.593953 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.598066 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.598393 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.603986 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.604081 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.604149 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.604230 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceph\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.604258 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.604315 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.604385 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z4qn\" (UniqueName: \"kubernetes.io/projected/82b0bfd1-2b9d-48c8-89dd-74db2d011083-kube-api-access-8z4qn\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.604524 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.626835 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"]
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.705888 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.705972 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.706004 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.706040 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.706085 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceph\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.706106 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.706128 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.706261 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z4qn\" (UniqueName: \"kubernetes.io/projected/82b0bfd1-2b9d-48c8-89dd-74db2d011083-kube-api-access-8z4qn\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.710213 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.710215 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.710390 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.719222 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.719398 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.719449 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceph\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.719526 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.723892 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z4qn\" (UniqueName: \"kubernetes.io/projected/82b0bfd1-2b9d-48c8-89dd-74db2d011083-kube-api-access-8z4qn\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:54 crc kubenswrapper[4904]: I1121 14:56:54.917762 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"
Nov 21 14:56:55 crc kubenswrapper[4904]: I1121 14:56:55.448014 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 14:56:55 crc kubenswrapper[4904]: I1121 14:56:55.451650 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc"]
Nov 21 14:56:55 crc kubenswrapper[4904]: I1121 14:56:55.488644 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc" event={"ID":"82b0bfd1-2b9d-48c8-89dd-74db2d011083","Type":"ContainerStarted","Data":"1f6bd6f64660d802fdb809443822671b5fbcd5c517b0902d2946cf028428a6d5"}
Nov 21 14:56:56 crc kubenswrapper[4904]: I1121 14:56:56.510431 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc" event={"ID":"82b0bfd1-2b9d-48c8-89dd-74db2d011083","Type":"ContainerStarted","Data":"bb215d419ac2fa79b8932d42bc6a9da85624146b1b1090bf0f7e552522ff158d"}
Nov 21 14:56:56 crc kubenswrapper[4904]: I1121 14:56:56.533873 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc" podStartSLOduration=1.99191634 podStartE2EDuration="2.533853105s" podCreationTimestamp="2025-11-21 14:56:54 +0000 UTC" firstStartedPulling="2025-11-21 14:56:55.447819112 +0000 UTC m=+5089.569351664" lastFinishedPulling="2025-11-21 14:56:55.989755867 +0000 UTC m=+5090.111288429" observedRunningTime="2025-11-21 14:56:56.529184661 +0000 UTC m=+5090.650717213" watchObservedRunningTime="2025-11-21 14:56:56.533853105 +0000 UTC m=+5090.655385647"
Nov 21 14:56:58 crc kubenswrapper[4904]: I1121 14:56:58.826613 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6vl9q"]
Nov 21 14:56:58 crc kubenswrapper[4904]: I1121 14:56:58.830723 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:58 crc kubenswrapper[4904]: I1121 14:56:58.838483 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vl9q"] Nov 21 14:56:58 crc kubenswrapper[4904]: I1121 14:56:58.902333 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnbt\" (UniqueName: \"kubernetes.io/projected/3506250e-612b-48ad-ac5a-1a941baf75d4-kube-api-access-5wnbt\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:58 crc kubenswrapper[4904]: I1121 14:56:58.902525 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-catalog-content\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:58 crc kubenswrapper[4904]: I1121 14:56:58.902557 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-utilities\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:59 crc kubenswrapper[4904]: I1121 14:56:59.003646 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-catalog-content\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:59 crc kubenswrapper[4904]: I1121 14:56:59.003701 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-utilities\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:59 crc kubenswrapper[4904]: I1121 14:56:59.003769 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnbt\" (UniqueName: \"kubernetes.io/projected/3506250e-612b-48ad-ac5a-1a941baf75d4-kube-api-access-5wnbt\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:59 crc kubenswrapper[4904]: I1121 14:56:59.004221 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-catalog-content\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:59 crc kubenswrapper[4904]: I1121 14:56:59.004408 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-utilities\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:59 crc kubenswrapper[4904]: I1121 14:56:59.022261 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5wnbt\" (UniqueName: \"kubernetes.io/projected/3506250e-612b-48ad-ac5a-1a941baf75d4-kube-api-access-5wnbt\") pod \"redhat-marketplace-6vl9q\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:59 crc kubenswrapper[4904]: I1121 14:56:59.152968 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:56:59 crc kubenswrapper[4904]: I1121 14:56:59.606123 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vl9q"] Nov 21 14:57:00 crc kubenswrapper[4904]: I1121 14:57:00.548549 4904 generic.go:334] "Generic (PLEG): container finished" podID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerID="c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc" exitCode=0 Nov 21 14:57:00 crc kubenswrapper[4904]: I1121 14:57:00.548602 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vl9q" event={"ID":"3506250e-612b-48ad-ac5a-1a941baf75d4","Type":"ContainerDied","Data":"c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc"} Nov 21 14:57:00 crc kubenswrapper[4904]: I1121 14:57:00.548630 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vl9q" event={"ID":"3506250e-612b-48ad-ac5a-1a941baf75d4","Type":"ContainerStarted","Data":"92b083ff14f12bd276643f63bf7f7316879dfc02ab6936cc1bd7f63d1257847b"} Nov 21 14:57:01 crc kubenswrapper[4904]: I1121 14:57:01.514129 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:57:01 crc kubenswrapper[4904]: E1121 14:57:01.514682 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:57:01 crc kubenswrapper[4904]: I1121 14:57:01.559350 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vl9q" event={"ID":"3506250e-612b-48ad-ac5a-1a941baf75d4","Type":"ContainerStarted","Data":"e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd"} Nov 21 14:57:03 crc kubenswrapper[4904]: I1121 14:57:03.599926 4904 generic.go:334] "Generic (PLEG): container finished" podID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerID="e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd" exitCode=0 Nov 21 14:57:03 crc kubenswrapper[4904]: I1121 14:57:03.600501 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vl9q" event={"ID":"3506250e-612b-48ad-ac5a-1a941baf75d4","Type":"ContainerDied","Data":"e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd"} Nov 21 14:57:04 crc kubenswrapper[4904]: I1121 14:57:04.615064 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vl9q" event={"ID":"3506250e-612b-48ad-ac5a-1a941baf75d4","Type":"ContainerStarted","Data":"e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90"} Nov 21 14:57:04 crc kubenswrapper[4904]: I1121 14:57:04.643533 4904 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-6vl9q" podStartSLOduration=2.979628505 podStartE2EDuration="6.643506033s" podCreationTimestamp="2025-11-21 14:56:58 +0000 UTC" firstStartedPulling="2025-11-21 14:57:00.551066743 +0000 UTC m=+5094.672599295" lastFinishedPulling="2025-11-21 14:57:04.214944281 +0000 UTC m=+5098.336476823" observedRunningTime="2025-11-21 14:57:04.635777305 +0000 UTC m=+5098.757309877" watchObservedRunningTime="2025-11-21 14:57:04.643506033 +0000 UTC m=+5098.765038585" Nov 21 14:57:09 crc kubenswrapper[4904]: I1121 14:57:09.153435 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:57:09 crc kubenswrapper[4904]: I1121 14:57:09.153974 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:57:09 crc kubenswrapper[4904]: I1121 14:57:09.499716 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:57:09 crc kubenswrapper[4904]: I1121 14:57:09.709705 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:57:09 crc kubenswrapper[4904]: I1121 14:57:09.761160 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vl9q"] Nov 21 14:57:11 crc kubenswrapper[4904]: I1121 14:57:11.681712 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6vl9q" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerName="registry-server" containerID="cri-o://e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90" gracePeriod=2 Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.177876 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.318283 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-utilities\") pod \"3506250e-612b-48ad-ac5a-1a941baf75d4\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.318606 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wnbt\" (UniqueName: \"kubernetes.io/projected/3506250e-612b-48ad-ac5a-1a941baf75d4-kube-api-access-5wnbt\") pod \"3506250e-612b-48ad-ac5a-1a941baf75d4\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.318693 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-catalog-content\") pod \"3506250e-612b-48ad-ac5a-1a941baf75d4\" (UID: \"3506250e-612b-48ad-ac5a-1a941baf75d4\") " Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.319627 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-utilities" (OuterVolumeSpecName: "utilities") pod "3506250e-612b-48ad-ac5a-1a941baf75d4" (UID: "3506250e-612b-48ad-ac5a-1a941baf75d4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.325035 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3506250e-612b-48ad-ac5a-1a941baf75d4-kube-api-access-5wnbt" (OuterVolumeSpecName: "kube-api-access-5wnbt") pod "3506250e-612b-48ad-ac5a-1a941baf75d4" (UID: "3506250e-612b-48ad-ac5a-1a941baf75d4"). InnerVolumeSpecName "kube-api-access-5wnbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.336936 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3506250e-612b-48ad-ac5a-1a941baf75d4" (UID: "3506250e-612b-48ad-ac5a-1a941baf75d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.422789 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.423120 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wnbt\" (UniqueName: \"kubernetes.io/projected/3506250e-612b-48ad-ac5a-1a941baf75d4-kube-api-access-5wnbt\") on node \"crc\" DevicePath \"\"" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.423134 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3506250e-612b-48ad-ac5a-1a941baf75d4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.514327 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:57:12 crc kubenswrapper[4904]: E1121 14:57:12.514584 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.729032 4904 generic.go:334] "Generic (PLEG): container finished" podID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerID="e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90" exitCode=0 Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.729148 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vl9q" event={"ID":"3506250e-612b-48ad-ac5a-1a941baf75d4","Type":"ContainerDied","Data":"e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90"} Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.729200 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vl9q" event={"ID":"3506250e-612b-48ad-ac5a-1a941baf75d4","Type":"ContainerDied","Data":"92b083ff14f12bd276643f63bf7f7316879dfc02ab6936cc1bd7f63d1257847b"} Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.729250 4904 scope.go:117] "RemoveContainer" containerID="e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90" Nov 21 14:57:12 crc 
kubenswrapper[4904]: I1121 14:57:12.729648 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vl9q" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.761511 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vl9q"] Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.762808 4904 scope.go:117] "RemoveContainer" containerID="e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.771516 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vl9q"] Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.785046 4904 scope.go:117] "RemoveContainer" containerID="c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.836280 4904 scope.go:117] "RemoveContainer" containerID="e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90" Nov 21 14:57:12 crc kubenswrapper[4904]: E1121 14:57:12.837002 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90\": container with ID starting with e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90 not found: ID does not exist" containerID="e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.837043 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90"} err="failed to get container status \"e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90\": rpc error: code = NotFound desc = could not find container \"e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90\": container with ID starting with e90375695c62644ed31cf9a2b795d4abc2174b4209148a4dfaa6fc0129785b90 not found: ID does not exist" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.837069 4904 scope.go:117] "RemoveContainer" containerID="e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd" Nov 21 14:57:12 crc kubenswrapper[4904]: E1121 14:57:12.837549 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd\": container with ID starting with e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd not found: ID does not exist" containerID="e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.837577 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd"} err="failed to get container status \"e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd\": rpc error: code = NotFound desc = could not find container \"e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd\": container with ID starting with e21853564f681a0c426db8bf8c4de2791cf38c7890bac16fa1beb1ef81f2cccd not found: ID does not exist" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.837598 4904 scope.go:117] "RemoveContainer" containerID="c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc" Nov 21 14:57:12 crc 
kubenswrapper[4904]: E1121 14:57:12.838020 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc\": container with ID starting with c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc not found: ID does not exist" containerID="c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc" Nov 21 14:57:12 crc kubenswrapper[4904]: I1121 14:57:12.838097 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc"} err="failed to get container status \"c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc\": rpc error: code = NotFound desc = could not find container \"c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc\": container with ID starting with c2801f510b11cda0a87de05b58c32c50d4f452c27132b73c3be866aa6f40a6cc not found: ID does not exist" Nov 21 14:57:14 crc kubenswrapper[4904]: I1121 14:57:14.525874 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" path="/var/lib/kubelet/pods/3506250e-612b-48ad-ac5a-1a941baf75d4/volumes" Nov 21 14:57:24 crc kubenswrapper[4904]: I1121 14:57:24.513555 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:57:24 crc kubenswrapper[4904]: E1121 14:57:24.514438 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:57:35 crc kubenswrapper[4904]: I1121 14:57:35.513626 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:57:35 crc kubenswrapper[4904]: E1121 14:57:35.514591 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:57:48 crc kubenswrapper[4904]: I1121 14:57:48.513700 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:57:48 crc kubenswrapper[4904]: E1121 14:57:48.514439 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.439883 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-84fc2"] Nov 21 14:57:55 crc kubenswrapper[4904]: E1121 14:57:55.441149 4904 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerName="extract-content" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.441166 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerName="extract-content" Nov 21 14:57:55 crc kubenswrapper[4904]: E1121 14:57:55.441195 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerName="registry-server" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.441203 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerName="registry-server" Nov 21 14:57:55 crc kubenswrapper[4904]: E1121 14:57:55.441227 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerName="extract-utilities" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.441236 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerName="extract-utilities" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.441449 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3506250e-612b-48ad-ac5a-1a941baf75d4" containerName="registry-server" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.443460 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.468251 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84fc2"] Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.545682 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jg6\" (UniqueName: \"kubernetes.io/projected/3685733c-5f54-4a10-980d-cb76726e5359-kube-api-access-t9jg6\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.545831 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-utilities\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.545903 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-catalog-content\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.648275 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-catalog-content\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.648563 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9jg6\" (UniqueName: 
\"kubernetes.io/projected/3685733c-5f54-4a10-980d-cb76726e5359-kube-api-access-t9jg6\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.648779 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-utilities\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.648896 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-catalog-content\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.649206 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-utilities\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.669791 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9jg6\" (UniqueName: \"kubernetes.io/projected/3685733c-5f54-4a10-980d-cb76726e5359-kube-api-access-t9jg6\") pod \"redhat-operators-84fc2\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:55 crc kubenswrapper[4904]: I1121 14:57:55.774899 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:57:56 crc kubenswrapper[4904]: I1121 14:57:56.319969 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84fc2"] Nov 21 14:57:57 crc kubenswrapper[4904]: I1121 14:57:57.188392 4904 generic.go:334] "Generic (PLEG): container finished" podID="3685733c-5f54-4a10-980d-cb76726e5359" containerID="6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57" exitCode=0 Nov 21 14:57:57 crc kubenswrapper[4904]: I1121 14:57:57.188461 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84fc2" event={"ID":"3685733c-5f54-4a10-980d-cb76726e5359","Type":"ContainerDied","Data":"6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57"} Nov 21 14:57:57 crc kubenswrapper[4904]: I1121 14:57:57.188689 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84fc2" event={"ID":"3685733c-5f54-4a10-980d-cb76726e5359","Type":"ContainerStarted","Data":"1f63e1fcdc26cdc7b331ae698a166b165a949e589f89e7b2c2be9a2500b201f7"} Nov 21 14:57:58 crc kubenswrapper[4904]: I1121 14:57:58.199355 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84fc2" event={"ID":"3685733c-5f54-4a10-980d-cb76726e5359","Type":"ContainerStarted","Data":"3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826"} Nov 21 14:58:00 crc kubenswrapper[4904]: I1121 14:58:00.513706 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:58:00 crc kubenswrapper[4904]: E1121 14:58:00.514533 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:58:04 crc kubenswrapper[4904]: E1121 14:58:04.883325 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3685733c_5f54_4a10_980d_cb76726e5359.slice/crio-conmon-3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3685733c_5f54_4a10_980d_cb76726e5359.slice/crio-3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826.scope\": RecentStats: unable to find data in memory cache]" Nov 21 14:58:05 crc kubenswrapper[4904]: I1121 14:58:05.286350 4904 generic.go:334] "Generic (PLEG): container finished" podID="3685733c-5f54-4a10-980d-cb76726e5359" containerID="3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826" exitCode=0 Nov 21 14:58:05 crc kubenswrapper[4904]: I1121 14:58:05.286406 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84fc2" event={"ID":"3685733c-5f54-4a10-980d-cb76726e5359","Type":"ContainerDied","Data":"3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826"} Nov 21 14:58:06 crc kubenswrapper[4904]: I1121 14:58:06.299057 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84fc2" 
event={"ID":"3685733c-5f54-4a10-980d-cb76726e5359","Type":"ContainerStarted","Data":"664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683"} Nov 21 14:58:06 crc kubenswrapper[4904]: I1121 14:58:06.321833 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-84fc2" podStartSLOduration=2.7552468340000003 podStartE2EDuration="11.321812399s" podCreationTimestamp="2025-11-21 14:57:55 +0000 UTC" firstStartedPulling="2025-11-21 14:57:57.191144252 +0000 UTC m=+5151.312676794" lastFinishedPulling="2025-11-21 14:58:05.757709767 +0000 UTC m=+5159.879242359" observedRunningTime="2025-11-21 14:58:06.317116125 +0000 UTC m=+5160.438648727" watchObservedRunningTime="2025-11-21 14:58:06.321812399 +0000 UTC m=+5160.443344951" Nov 21 14:58:13 crc kubenswrapper[4904]: I1121 14:58:13.513540 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:58:13 crc kubenswrapper[4904]: E1121 14:58:13.514501 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.447116 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cfr6q"] Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.450988 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.462885 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cfr6q"] Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.509740 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9k7r\" (UniqueName: \"kubernetes.io/projected/58da6b7f-9e7b-484e-9901-068c2097ae3f-kube-api-access-d9k7r\") pod \"community-operators-cfr6q\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.509894 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-catalog-content\") pod \"community-operators-cfr6q\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.509959 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-utilities\") pod \"community-operators-cfr6q\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.612809 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-catalog-content\") pod \"community-operators-cfr6q\" (UID: 
\"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.613058 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-utilities\") pod \"community-operators-cfr6q\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.613119 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9k7r\" (UniqueName: \"kubernetes.io/projected/58da6b7f-9e7b-484e-9901-068c2097ae3f-kube-api-access-d9k7r\") pod \"community-operators-cfr6q\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.613414 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-catalog-content\") pod \"community-operators-cfr6q\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.614125 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-utilities\") pod \"community-operators-cfr6q\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.638954 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9k7r\" (UniqueName: \"kubernetes.io/projected/58da6b7f-9e7b-484e-9901-068c2097ae3f-kube-api-access-d9k7r\") pod \"community-operators-cfr6q\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:14 crc kubenswrapper[4904]: I1121 14:58:14.789528 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:15 crc kubenswrapper[4904]: I1121 14:58:15.386796 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cfr6q"] Nov 21 14:58:15 crc kubenswrapper[4904]: I1121 14:58:15.776037 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:58:15 crc kubenswrapper[4904]: I1121 14:58:15.776897 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:58:16 crc kubenswrapper[4904]: I1121 14:58:16.412902 4904 generic.go:334] "Generic (PLEG): container finished" podID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerID="15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891" exitCode=0 Nov 21 14:58:16 crc kubenswrapper[4904]: I1121 14:58:16.413120 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfr6q" event={"ID":"58da6b7f-9e7b-484e-9901-068c2097ae3f","Type":"ContainerDied","Data":"15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891"} Nov 21 14:58:16 crc kubenswrapper[4904]: I1121 14:58:16.413504 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfr6q" event={"ID":"58da6b7f-9e7b-484e-9901-068c2097ae3f","Type":"ContainerStarted","Data":"238898b3291ca91b4599803a06d02c24f509b687925e4999190fb3bc69b7547a"} Nov 21 14:58:17 crc kubenswrapper[4904]: I1121 14:58:17.021534 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-84fc2" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="registry-server" probeResult="failure" output=< Nov 21 14:58:17 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:58:17 crc kubenswrapper[4904]: > Nov 21 14:58:17 crc kubenswrapper[4904]: I1121 14:58:17.429899 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfr6q" event={"ID":"58da6b7f-9e7b-484e-9901-068c2097ae3f","Type":"ContainerStarted","Data":"959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60"} Nov 21 14:58:18 crc kubenswrapper[4904]: I1121 14:58:18.444371 4904 generic.go:334] "Generic (PLEG): container finished" podID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerID="959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60" exitCode=0 Nov 21 14:58:18 crc kubenswrapper[4904]: I1121 14:58:18.444798 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfr6q" event={"ID":"58da6b7f-9e7b-484e-9901-068c2097ae3f","Type":"ContainerDied","Data":"959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60"} Nov 21 14:58:20 crc kubenswrapper[4904]: I1121 14:58:20.471195 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfr6q" event={"ID":"58da6b7f-9e7b-484e-9901-068c2097ae3f","Type":"ContainerStarted","Data":"b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7"} Nov 21 14:58:20 crc kubenswrapper[4904]: I1121 14:58:20.498839 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cfr6q" podStartSLOduration=3.338120469 podStartE2EDuration="6.498805149s" podCreationTimestamp="2025-11-21 14:58:14 +0000 UTC" firstStartedPulling="2025-11-21 14:58:16.415539952 +0000 UTC 
m=+5170.537072504" lastFinishedPulling="2025-11-21 14:58:19.576224592 +0000 UTC m=+5173.697757184" observedRunningTime="2025-11-21 14:58:20.489681488 +0000 UTC m=+5174.611214040" watchObservedRunningTime="2025-11-21 14:58:20.498805149 +0000 UTC m=+5174.620337701" Nov 21 14:58:24 crc kubenswrapper[4904]: I1121 14:58:24.790319 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:24 crc kubenswrapper[4904]: I1121 14:58:24.790957 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:25 crc kubenswrapper[4904]: I1121 14:58:25.829690 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:58:25 crc kubenswrapper[4904]: I1121 14:58:25.841139 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cfr6q" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="registry-server" probeResult="failure" output=< Nov 21 14:58:25 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 14:58:25 crc kubenswrapper[4904]: > Nov 21 14:58:25 crc kubenswrapper[4904]: I1121 14:58:25.888597 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:58:26 crc kubenswrapper[4904]: I1121 14:58:26.527334 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:58:26 crc kubenswrapper[4904]: E1121 14:58:26.527879 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:58:28 crc kubenswrapper[4904]: I1121 14:58:28.867033 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84fc2"] Nov 21 14:58:28 crc kubenswrapper[4904]: I1121 14:58:28.867969 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-84fc2" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="registry-server" containerID="cri-o://664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683" gracePeriod=2 Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.437192 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.547849 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9jg6\" (UniqueName: \"kubernetes.io/projected/3685733c-5f54-4a10-980d-cb76726e5359-kube-api-access-t9jg6\") pod \"3685733c-5f54-4a10-980d-cb76726e5359\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.547912 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-utilities\") pod \"3685733c-5f54-4a10-980d-cb76726e5359\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.548132 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-catalog-content\") pod \"3685733c-5f54-4a10-980d-cb76726e5359\" (UID: \"3685733c-5f54-4a10-980d-cb76726e5359\") " Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.549518 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-utilities" (OuterVolumeSpecName: "utilities") pod "3685733c-5f54-4a10-980d-cb76726e5359" (UID: "3685733c-5f54-4a10-980d-cb76726e5359"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.553932 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.566248 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3685733c-5f54-4a10-980d-cb76726e5359-kube-api-access-t9jg6" (OuterVolumeSpecName: "kube-api-access-t9jg6") pod "3685733c-5f54-4a10-980d-cb76726e5359" (UID: "3685733c-5f54-4a10-980d-cb76726e5359"). InnerVolumeSpecName "kube-api-access-t9jg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.570812 4904 generic.go:334] "Generic (PLEG): container finished" podID="3685733c-5f54-4a10-980d-cb76726e5359" containerID="664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683" exitCode=0 Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.570874 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84fc2" event={"ID":"3685733c-5f54-4a10-980d-cb76726e5359","Type":"ContainerDied","Data":"664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683"} Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.570914 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84fc2" event={"ID":"3685733c-5f54-4a10-980d-cb76726e5359","Type":"ContainerDied","Data":"1f63e1fcdc26cdc7b331ae698a166b165a949e589f89e7b2c2be9a2500b201f7"} Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.570932 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84fc2" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.570948 4904 scope.go:117] "RemoveContainer" containerID="664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.646989 4904 scope.go:117] "RemoveContainer" containerID="3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.656573 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9jg6\" (UniqueName: \"kubernetes.io/projected/3685733c-5f54-4a10-980d-cb76726e5359-kube-api-access-t9jg6\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.668526 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3685733c-5f54-4a10-980d-cb76726e5359" (UID: "3685733c-5f54-4a10-980d-cb76726e5359"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.675463 4904 scope.go:117] "RemoveContainer" containerID="6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.731413 4904 scope.go:117] "RemoveContainer" containerID="664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683" Nov 21 14:58:29 crc kubenswrapper[4904]: E1121 14:58:29.731994 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683\": container with ID starting with 664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683 not found: ID does not exist" containerID="664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.732045 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683"} err="failed to get container status \"664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683\": rpc error: code = NotFound desc = could not find container \"664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683\": container with ID starting with 664ac0d5ef2cc4fcec62ae48e92f381fe36aa0621c9392cfa2d7d373fa3a5683 not found: ID does not exist" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.732089 4904 scope.go:117] "RemoveContainer" containerID="3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826" Nov 21 14:58:29 crc kubenswrapper[4904]: E1121 14:58:29.732540 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826\": container with ID starting with 3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826 not found: ID does not exist" containerID="3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.732566 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826"} err="failed to get container status 
\"3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826\": rpc error: code = NotFound desc = could not find container \"3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826\": container with ID starting with 3473cc7fb8c927871e2d9899e3a277f5d082f500c9e0672b500fddc36d769826 not found: ID does not exist" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.732585 4904 scope.go:117] "RemoveContainer" containerID="6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57" Nov 21 14:58:29 crc kubenswrapper[4904]: E1121 14:58:29.732891 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57\": container with ID starting with 6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57 not found: ID does not exist" containerID="6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.732917 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57"} err="failed to get container status \"6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57\": rpc error: code = NotFound desc = could not find container \"6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57\": container with ID starting with 6ecf95fb5a5c46e7da470fb3b725b6da34b926515da15ad77aa3e7f37e0e4b57 not found: ID does not exist" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.759405 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3685733c-5f54-4a10-980d-cb76726e5359-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.920640 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84fc2"] Nov 21 14:58:29 crc kubenswrapper[4904]: I1121 14:58:29.936585 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-84fc2"] Nov 21 14:58:30 crc kubenswrapper[4904]: I1121 14:58:30.528018 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3685733c-5f54-4a10-980d-cb76726e5359" path="/var/lib/kubelet/pods/3685733c-5f54-4a10-980d-cb76726e5359/volumes" Nov 21 14:58:34 crc kubenswrapper[4904]: I1121 14:58:34.848858 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:34 crc kubenswrapper[4904]: I1121 14:58:34.903095 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:35 crc kubenswrapper[4904]: I1121 14:58:35.093337 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cfr6q"] Nov 21 14:58:36 crc kubenswrapper[4904]: I1121 14:58:36.651193 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cfr6q" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="registry-server" containerID="cri-o://b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7" gracePeriod=2 Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.173943 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.277692 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9k7r\" (UniqueName: \"kubernetes.io/projected/58da6b7f-9e7b-484e-9901-068c2097ae3f-kube-api-access-d9k7r\") pod \"58da6b7f-9e7b-484e-9901-068c2097ae3f\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.277760 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-utilities\") pod \"58da6b7f-9e7b-484e-9901-068c2097ae3f\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.277797 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-catalog-content\") pod \"58da6b7f-9e7b-484e-9901-068c2097ae3f\" (UID: \"58da6b7f-9e7b-484e-9901-068c2097ae3f\") " Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.278975 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-utilities" (OuterVolumeSpecName: "utilities") pod "58da6b7f-9e7b-484e-9901-068c2097ae3f" (UID: "58da6b7f-9e7b-484e-9901-068c2097ae3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.284288 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58da6b7f-9e7b-484e-9901-068c2097ae3f-kube-api-access-d9k7r" (OuterVolumeSpecName: "kube-api-access-d9k7r") pod "58da6b7f-9e7b-484e-9901-068c2097ae3f" (UID: "58da6b7f-9e7b-484e-9901-068c2097ae3f"). InnerVolumeSpecName "kube-api-access-d9k7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.366033 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58da6b7f-9e7b-484e-9901-068c2097ae3f" (UID: "58da6b7f-9e7b-484e-9901-068c2097ae3f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.380900 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9k7r\" (UniqueName: \"kubernetes.io/projected/58da6b7f-9e7b-484e-9901-068c2097ae3f-kube-api-access-d9k7r\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.380942 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.380955 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58da6b7f-9e7b-484e-9901-068c2097ae3f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.663008 4904 generic.go:334] "Generic (PLEG): container finished" podID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerID="b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7" exitCode=0 Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.663062 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfr6q" event={"ID":"58da6b7f-9e7b-484e-9901-068c2097ae3f","Type":"ContainerDied","Data":"b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7"} Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.663094 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfr6q" event={"ID":"58da6b7f-9e7b-484e-9901-068c2097ae3f","Type":"ContainerDied","Data":"238898b3291ca91b4599803a06d02c24f509b687925e4999190fb3bc69b7547a"} Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.663098 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cfr6q" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.663112 4904 scope.go:117] "RemoveContainer" containerID="b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.708925 4904 scope.go:117] "RemoveContainer" containerID="959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.720384 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cfr6q"] Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.731671 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cfr6q"] Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.742382 4904 scope.go:117] "RemoveContainer" containerID="15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.789776 4904 scope.go:117] "RemoveContainer" containerID="b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7" Nov 21 14:58:37 crc kubenswrapper[4904]: E1121 14:58:37.791226 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7\": container with ID starting with b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7 not found: ID does not exist" containerID="b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.791285 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7"} err="failed to get container status \"b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7\": rpc error: code = NotFound desc = could not find container \"b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7\": container with ID starting with b788f2e149e9276f28ac42f5ee2b59a530a02ecbcd9ae0fdab676a7a15db1df7 not found: ID does not exist" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.791355 4904 scope.go:117] "RemoveContainer" containerID="959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60" Nov 21 14:58:37 crc kubenswrapper[4904]: E1121 14:58:37.791799 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60\": container with ID starting with 959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60 not found: ID does not exist" containerID="959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.791842 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60"} err="failed to get container status \"959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60\": rpc error: code = NotFound desc = could not find container \"959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60\": container with ID starting with 959b435b7bff77021e7e80bca8f680647a7d606826ec8e95895d0252ef6bfc60 not found: ID does not exist" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.791869 4904 scope.go:117] "RemoveContainer" 
containerID="15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891" Nov 21 14:58:37 crc kubenswrapper[4904]: E1121 14:58:37.792316 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891\": container with ID starting with 15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891 not found: ID does not exist" containerID="15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891" Nov 21 14:58:37 crc kubenswrapper[4904]: I1121 14:58:37.792351 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891"} err="failed to get container status \"15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891\": rpc error: code = NotFound desc = could not find container \"15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891\": container with ID starting with 15523c42284fbdef0b65306311c758ffdde3afe86a83c976bf488511635d6891 not found: ID does not exist" Nov 21 14:58:38 crc kubenswrapper[4904]: I1121 14:58:38.530060 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" path="/var/lib/kubelet/pods/58da6b7f-9e7b-484e-9901-068c2097ae3f/volumes" Nov 21 14:58:39 crc kubenswrapper[4904]: I1121 14:58:39.513570 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:58:39 crc kubenswrapper[4904]: E1121 14:58:39.514300 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:58:52 crc kubenswrapper[4904]: I1121 14:58:52.517030 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:58:52 crc kubenswrapper[4904]: E1121 14:58:52.518110 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:59:03 crc kubenswrapper[4904]: I1121 14:59:03.512925 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:59:03 crc kubenswrapper[4904]: E1121 14:59:03.513711 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:59:17 crc kubenswrapper[4904]: I1121 14:59:17.514389 4904 scope.go:117] "RemoveContainer" 
containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:59:17 crc kubenswrapper[4904]: E1121 14:59:17.515819 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:59:32 crc kubenswrapper[4904]: I1121 14:59:32.513373 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:59:32 crc kubenswrapper[4904]: E1121 14:59:32.514146 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:59:44 crc kubenswrapper[4904]: I1121 14:59:44.514148 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 14:59:44 crc kubenswrapper[4904]: E1121 14:59:44.515603 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 14:59:52 crc kubenswrapper[4904]: I1121 14:59:52.480781 4904 generic.go:334] "Generic (PLEG): container finished" podID="82b0bfd1-2b9d-48c8-89dd-74db2d011083" containerID="bb215d419ac2fa79b8932d42bc6a9da85624146b1b1090bf0f7e552522ff158d" exitCode=0 Nov 21 14:59:52 crc kubenswrapper[4904]: I1121 14:59:52.480873 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc" event={"ID":"82b0bfd1-2b9d-48c8-89dd-74db2d011083","Type":"ContainerDied","Data":"bb215d419ac2fa79b8932d42bc6a9da85624146b1b1090bf0f7e552522ff158d"} Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.062875 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.100063 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z4qn\" (UniqueName: \"kubernetes.io/projected/82b0bfd1-2b9d-48c8-89dd-74db2d011083-kube-api-access-8z4qn\") pod \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.110258 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b0bfd1-2b9d-48c8-89dd-74db2d011083-kube-api-access-8z4qn" (OuterVolumeSpecName: "kube-api-access-8z4qn") pod "82b0bfd1-2b9d-48c8-89dd-74db2d011083" (UID: "82b0bfd1-2b9d-48c8-89dd-74db2d011083"). InnerVolumeSpecName "kube-api-access-8z4qn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.204186 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceph\") pod \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.204393 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-2\") pod \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.204475 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-inventory\") pod \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.204503 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-telemetry-power-monitoring-combined-ca-bundle\") pod \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.204621 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ssh-key\") pod \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.204815 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-1\") pod \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.204846 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-0\") pod \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\" (UID: \"82b0bfd1-2b9d-48c8-89dd-74db2d011083\") " Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.205680 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z4qn\" (UniqueName: \"kubernetes.io/projected/82b0bfd1-2b9d-48c8-89dd-74db2d011083-kube-api-access-8z4qn\") on node \"crc\" DevicePath \"\"" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.211329 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceph" (OuterVolumeSpecName: "ceph") pod "82b0bfd1-2b9d-48c8-89dd-74db2d011083" (UID: "82b0bfd1-2b9d-48c8-89dd-74db2d011083"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.213942 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "82b0bfd1-2b9d-48c8-89dd-74db2d011083" (UID: "82b0bfd1-2b9d-48c8-89dd-74db2d011083"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.243635 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "82b0bfd1-2b9d-48c8-89dd-74db2d011083" (UID: "82b0bfd1-2b9d-48c8-89dd-74db2d011083"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.243689 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "82b0bfd1-2b9d-48c8-89dd-74db2d011083" (UID: "82b0bfd1-2b9d-48c8-89dd-74db2d011083"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.245610 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "82b0bfd1-2b9d-48c8-89dd-74db2d011083" (UID: "82b0bfd1-2b9d-48c8-89dd-74db2d011083"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.249408 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "82b0bfd1-2b9d-48c8-89dd-74db2d011083" (UID: "82b0bfd1-2b9d-48c8-89dd-74db2d011083"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.256893 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-inventory" (OuterVolumeSpecName: "inventory") pod "82b0bfd1-2b9d-48c8-89dd-74db2d011083" (UID: "82b0bfd1-2b9d-48c8-89dd-74db2d011083"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.307415 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.307484 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.307503 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.307519 4904 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.307535 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.307547 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.307558 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/82b0bfd1-2b9d-48c8-89dd-74db2d011083-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.503573 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc" event={"ID":"82b0bfd1-2b9d-48c8-89dd-74db2d011083","Type":"ContainerDied","Data":"1f6bd6f64660d802fdb809443822671b5fbcd5c517b0902d2946cf028428a6d5"} Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.504152 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f6bd6f64660d802fdb809443822671b5fbcd5c517b0902d2946cf028428a6d5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.503676 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.606745 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5"] Nov 21 14:59:54 crc kubenswrapper[4904]: E1121 14:59:54.607551 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b0bfd1-2b9d-48c8-89dd-74db2d011083" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.607581 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b0bfd1-2b9d-48c8-89dd-74db2d011083" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 14:59:54 crc kubenswrapper[4904]: E1121 14:59:54.607591 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="extract-utilities" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.607600 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="extract-utilities" Nov 21 14:59:54 crc kubenswrapper[4904]: E1121 14:59:54.607626 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="extract-content" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.607633 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="extract-content" Nov 21 14:59:54 crc kubenswrapper[4904]: E1121 14:59:54.607673 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="extract-content" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.607682 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="extract-content" Nov 21 14:59:54 crc kubenswrapper[4904]: E1121 14:59:54.607705 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="extract-utilities" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.607711 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="extract-utilities" Nov 21 14:59:54 crc kubenswrapper[4904]: E1121 14:59:54.607742 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="registry-server" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.607747 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="registry-server" Nov 21 14:59:54 crc kubenswrapper[4904]: E1121 14:59:54.607755 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="registry-server" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.607761 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="registry-server" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.608042 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="58da6b7f-9e7b-484e-9901-068c2097ae3f" containerName="registry-server" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.608060 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b0bfd1-2b9d-48c8-89dd-74db2d011083" 
containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.608093 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3685733c-5f54-4a10-980d-cb76726e5359" containerName="registry-server" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.609348 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.612798 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.613024 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.613295 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.613382 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6zkz4" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.613411 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.614043 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.636298 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5"] Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.719416 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.719885 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.719931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.720819 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 
14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.720853 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ceph\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.720894 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx22x\" (UniqueName: \"kubernetes.io/projected/16cb76b3-8c36-46f5-b221-df0d03da240e-kube-api-access-dx22x\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.823356 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.823541 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.823574 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.823622 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.823648 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ceph\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.823708 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx22x\" (UniqueName: \"kubernetes.io/projected/16cb76b3-8c36-46f5-b221-df0d03da240e-kube-api-access-dx22x\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.828181 4904 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.828824 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.828837 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.831279 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.832257 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ceph\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.843223 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx22x\" (UniqueName: \"kubernetes.io/projected/16cb76b3-8c36-46f5-b221-df0d03da240e-kube-api-access-dx22x\") pod \"logging-edpm-deployment-openstack-edpm-ipam-k8zt5\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:54 crc kubenswrapper[4904]: I1121 14:59:54.931267 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 14:59:55 crc kubenswrapper[4904]: I1121 14:59:55.661039 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5"] Nov 21 14:59:56 crc kubenswrapper[4904]: I1121 14:59:56.581701 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" event={"ID":"16cb76b3-8c36-46f5-b221-df0d03da240e","Type":"ContainerStarted","Data":"ad38c1425e55acfab8de4d0ae43672a2e13c1232feba71e1a95809a13006f281"} Nov 21 14:59:57 crc kubenswrapper[4904]: I1121 14:59:57.601079 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" event={"ID":"16cb76b3-8c36-46f5-b221-df0d03da240e","Type":"ContainerStarted","Data":"fea565910793ab21cdd9606cfa7c92af0b5070a234d1c084a7cda94878fc9db0"} Nov 21 14:59:57 crc kubenswrapper[4904]: I1121 14:59:57.633270 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" podStartSLOduration=2.902794915 podStartE2EDuration="3.633247261s" podCreationTimestamp="2025-11-21 14:59:54 +0000 UTC" firstStartedPulling="2025-11-21 14:59:55.670783479 +0000 UTC m=+5269.792316031" lastFinishedPulling="2025-11-21 14:59:56.401235815 +0000 UTC m=+5270.522768377" observedRunningTime="2025-11-21 14:59:57.627321748 +0000 UTC m=+5271.748854310" watchObservedRunningTime="2025-11-21 14:59:57.633247261 +0000 UTC m=+5271.754779813" Nov 21 14:59:59 crc kubenswrapper[4904]: I1121 14:59:59.514570 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.144455 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd"] Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.146484 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.149187 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.149437 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.159133 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd"] Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.248162 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72hf\" (UniqueName: \"kubernetes.io/projected/2bd48f58-48a5-4b78-b157-b3808d8591b2-kube-api-access-z72hf\") pod \"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.249059 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd48f58-48a5-4b78-b157-b3808d8591b2-secret-volume\") pod \"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.249270 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd48f58-48a5-4b78-b157-b3808d8591b2-config-volume\") pod \"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.352627 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd48f58-48a5-4b78-b157-b3808d8591b2-secret-volume\") pod \"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.352804 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd48f58-48a5-4b78-b157-b3808d8591b2-config-volume\") pod \"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.352959 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72hf\" (UniqueName: \"kubernetes.io/projected/2bd48f58-48a5-4b78-b157-b3808d8591b2-kube-api-access-z72hf\") pod \"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.354080 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd48f58-48a5-4b78-b157-b3808d8591b2-config-volume\") pod 
\"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.364621 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd48f58-48a5-4b78-b157-b3808d8591b2-secret-volume\") pod \"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.373446 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72hf\" (UniqueName: \"kubernetes.io/projected/2bd48f58-48a5-4b78-b157-b3808d8591b2-kube-api-access-z72hf\") pod \"collect-profiles-29395620-p8bfd\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.473421 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:00 crc kubenswrapper[4904]: I1121 15:00:00.741347 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"25a1d25832a10a210874215d77da23ed0a585352adebb0db83f6cfc5818d8e6d"} Nov 21 15:00:01 crc kubenswrapper[4904]: I1121 15:00:01.018014 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd"] Nov 21 15:00:01 crc kubenswrapper[4904]: I1121 15:00:01.753517 4904 generic.go:334] "Generic (PLEG): container finished" podID="2bd48f58-48a5-4b78-b157-b3808d8591b2" containerID="74ba0bd4ec04cd6366c66a127fe351746ee72c016f8313c8efeda183f72446e7" exitCode=0 Nov 21 15:00:01 crc kubenswrapper[4904]: I1121 15:00:01.753714 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" event={"ID":"2bd48f58-48a5-4b78-b157-b3808d8591b2","Type":"ContainerDied","Data":"74ba0bd4ec04cd6366c66a127fe351746ee72c016f8313c8efeda183f72446e7"} Nov 21 15:00:01 crc kubenswrapper[4904]: I1121 15:00:01.753935 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" event={"ID":"2bd48f58-48a5-4b78-b157-b3808d8591b2","Type":"ContainerStarted","Data":"d01f232b1802509708af45b960f1d07a36adec26a1be3d35072bb5a4dd4eb8de"} Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.243222 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.335931 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd48f58-48a5-4b78-b157-b3808d8591b2-config-volume\") pod \"2bd48f58-48a5-4b78-b157-b3808d8591b2\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.336031 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z72hf\" (UniqueName: \"kubernetes.io/projected/2bd48f58-48a5-4b78-b157-b3808d8591b2-kube-api-access-z72hf\") pod \"2bd48f58-48a5-4b78-b157-b3808d8591b2\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.336291 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd48f58-48a5-4b78-b157-b3808d8591b2-secret-volume\") pod \"2bd48f58-48a5-4b78-b157-b3808d8591b2\" (UID: \"2bd48f58-48a5-4b78-b157-b3808d8591b2\") " Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.337632 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd48f58-48a5-4b78-b157-b3808d8591b2-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bd48f58-48a5-4b78-b157-b3808d8591b2" (UID: "2bd48f58-48a5-4b78-b157-b3808d8591b2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.346005 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd48f58-48a5-4b78-b157-b3808d8591b2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bd48f58-48a5-4b78-b157-b3808d8591b2" (UID: "2bd48f58-48a5-4b78-b157-b3808d8591b2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.346128 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd48f58-48a5-4b78-b157-b3808d8591b2-kube-api-access-z72hf" (OuterVolumeSpecName: "kube-api-access-z72hf") pod "2bd48f58-48a5-4b78-b157-b3808d8591b2" (UID: "2bd48f58-48a5-4b78-b157-b3808d8591b2"). InnerVolumeSpecName "kube-api-access-z72hf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.441168 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd48f58-48a5-4b78-b157-b3808d8591b2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.441226 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd48f58-48a5-4b78-b157-b3808d8591b2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.441242 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z72hf\" (UniqueName: \"kubernetes.io/projected/2bd48f58-48a5-4b78-b157-b3808d8591b2-kube-api-access-z72hf\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.780582 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" event={"ID":"2bd48f58-48a5-4b78-b157-b3808d8591b2","Type":"ContainerDied","Data":"d01f232b1802509708af45b960f1d07a36adec26a1be3d35072bb5a4dd4eb8de"} Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.780692 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd" Nov 21 15:00:03 crc kubenswrapper[4904]: I1121 15:00:03.780739 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d01f232b1802509708af45b960f1d07a36adec26a1be3d35072bb5a4dd4eb8de" Nov 21 15:00:04 crc kubenswrapper[4904]: I1121 15:00:04.339221 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn"] Nov 21 15:00:04 crc kubenswrapper[4904]: I1121 15:00:04.351788 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395575-2bmgn"] Nov 21 15:00:04 crc kubenswrapper[4904]: I1121 15:00:04.533475 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37cbcccf-c50a-4124-9689-6dd7ba9a80f5" path="/var/lib/kubelet/pods/37cbcccf-c50a-4124-9689-6dd7ba9a80f5/volumes" Nov 21 15:00:12 crc kubenswrapper[4904]: I1121 15:00:12.910919 4904 generic.go:334] "Generic (PLEG): container finished" podID="16cb76b3-8c36-46f5-b221-df0d03da240e" containerID="fea565910793ab21cdd9606cfa7c92af0b5070a234d1c084a7cda94878fc9db0" exitCode=0 Nov 21 15:00:12 crc kubenswrapper[4904]: I1121 15:00:12.911017 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" event={"ID":"16cb76b3-8c36-46f5-b221-df0d03da240e","Type":"ContainerDied","Data":"fea565910793ab21cdd9606cfa7c92af0b5070a234d1c084a7cda94878fc9db0"} Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.475037 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.566169 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-1\") pod \"16cb76b3-8c36-46f5-b221-df0d03da240e\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.566809 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-inventory\") pod \"16cb76b3-8c36-46f5-b221-df0d03da240e\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.566858 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx22x\" (UniqueName: \"kubernetes.io/projected/16cb76b3-8c36-46f5-b221-df0d03da240e-kube-api-access-dx22x\") pod \"16cb76b3-8c36-46f5-b221-df0d03da240e\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.566908 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ceph\") pod \"16cb76b3-8c36-46f5-b221-df0d03da240e\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.566961 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ssh-key\") pod \"16cb76b3-8c36-46f5-b221-df0d03da240e\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.567009 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-0\") pod \"16cb76b3-8c36-46f5-b221-df0d03da240e\" (UID: \"16cb76b3-8c36-46f5-b221-df0d03da240e\") " Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.576209 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ceph" (OuterVolumeSpecName: "ceph") pod "16cb76b3-8c36-46f5-b221-df0d03da240e" (UID: "16cb76b3-8c36-46f5-b221-df0d03da240e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.578896 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16cb76b3-8c36-46f5-b221-df0d03da240e-kube-api-access-dx22x" (OuterVolumeSpecName: "kube-api-access-dx22x") pod "16cb76b3-8c36-46f5-b221-df0d03da240e" (UID: "16cb76b3-8c36-46f5-b221-df0d03da240e"). InnerVolumeSpecName "kube-api-access-dx22x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.602040 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-inventory" (OuterVolumeSpecName: "inventory") pod "16cb76b3-8c36-46f5-b221-df0d03da240e" (UID: "16cb76b3-8c36-46f5-b221-df0d03da240e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.605788 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "16cb76b3-8c36-46f5-b221-df0d03da240e" (UID: "16cb76b3-8c36-46f5-b221-df0d03da240e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.608040 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "16cb76b3-8c36-46f5-b221-df0d03da240e" (UID: "16cb76b3-8c36-46f5-b221-df0d03da240e"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.608751 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "16cb76b3-8c36-46f5-b221-df0d03da240e" (UID: "16cb76b3-8c36-46f5-b221-df0d03da240e"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.669981 4904 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.670017 4904 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-inventory\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.670027 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx22x\" (UniqueName: \"kubernetes.io/projected/16cb76b3-8c36-46f5-b221-df0d03da240e-kube-api-access-dx22x\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.670037 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.670048 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.670057 4904 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/16cb76b3-8c36-46f5-b221-df0d03da240e-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.935332 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" event={"ID":"16cb76b3-8c36-46f5-b221-df0d03da240e","Type":"ContainerDied","Data":"ad38c1425e55acfab8de4d0ae43672a2e13c1232feba71e1a95809a13006f281"} Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.935375 4904 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ad38c1425e55acfab8de4d0ae43672a2e13c1232feba71e1a95809a13006f281" Nov 21 15:00:14 crc kubenswrapper[4904]: I1121 15:00:14.936023 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-k8zt5" Nov 21 15:00:32 crc kubenswrapper[4904]: I1121 15:00:32.881303 4904 scope.go:117] "RemoveContainer" containerID="9450912097a3063fa337ba87946c002d6889d2cc6d846fbd629e2f4614cc3638" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.144267 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 21 15:00:33 crc kubenswrapper[4904]: E1121 15:00:33.144727 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd48f58-48a5-4b78-b157-b3808d8591b2" containerName="collect-profiles" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.144746 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd48f58-48a5-4b78-b157-b3808d8591b2" containerName="collect-profiles" Nov 21 15:00:33 crc kubenswrapper[4904]: E1121 15:00:33.144760 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cb76b3-8c36-46f5-b221-df0d03da240e" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.144768 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cb76b3-8c36-46f5-b221-df0d03da240e" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.145019 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd48f58-48a5-4b78-b157-b3808d8591b2" containerName="collect-profiles" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.145046 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cb76b3-8c36-46f5-b221-df0d03da240e" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.146610 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.150701 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.151061 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.159252 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.161155 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.163124 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.170540 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.193272 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210025 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-run\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210075 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210096 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-dev\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210119 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210153 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210171 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-sys\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210191 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210226 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " 
pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210246 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210260 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210296 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210323 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210340 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210358 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210384 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210413 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/186f682e-35c9-47ac-8e62-264769272a1b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc 
kubenswrapper[4904]: I1121 15:00:33.210479 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210504 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-run\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210532 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6tjj\" (UniqueName: \"kubernetes.io/projected/186f682e-35c9-47ac-8e62-264769272a1b-kube-api-access-q6tjj\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210559 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210583 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-lib-modules\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210599 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-config-data\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210623 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210641 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210675 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210696 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-scripts\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210746 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210768 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-ceph\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210788 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210810 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29ghk\" (UniqueName: \"kubernetes.io/projected/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-kube-api-access-29ghk\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.210876 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.312750 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/186f682e-35c9-47ac-8e62-264769272a1b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.312822 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.312848 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-run\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.312888 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6tjj\" (UniqueName: 
\"kubernetes.io/projected/186f682e-35c9-47ac-8e62-264769272a1b-kube-api-access-q6tjj\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.312926 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.312998 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-lib-modules\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313026 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-config-data\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313059 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313082 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313106 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313129 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-scripts\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313154 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313210 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313208 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-lib-modules\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313055 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-run\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313317 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313247 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313330 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313397 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-ceph\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313437 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29ghk\" (UniqueName: \"kubernetes.io/projected/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-kube-api-access-29ghk\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313522 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313554 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-run\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313575 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313599 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-dev\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313643 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313674 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313691 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-sys\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313734 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313803 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313822 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313834 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313832 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313866 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " 
pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313878 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-dev\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313909 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-run\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313939 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313966 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-dev\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.313986 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.314052 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.314065 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.314070 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.314091 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-sys\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.314096 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " 
pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.314128 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.314172 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.314200 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.315064 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.315192 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-sys\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.315223 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.315251 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.315603 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.315928 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/186f682e-35c9-47ac-8e62-264769272a1b-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.321599 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-combined-ca-bundle\") pod 
\"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.327237 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-scripts\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.327351 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.327819 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.328031 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.329741 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-ceph\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.330785 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.332330 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29ghk\" (UniqueName: \"kubernetes.io/projected/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-kube-api-access-29ghk\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.333470 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/186f682e-35c9-47ac-8e62-264769272a1b-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.339520 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4-config-data\") pod \"cinder-backup-0\" (UID: \"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4\") " pod="openstack/cinder-backup-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.342235 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/186f682e-35c9-47ac-8e62-264769272a1b-scripts\") pod 
\"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.342767 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6tjj\" (UniqueName: \"kubernetes.io/projected/186f682e-35c9-47ac-8e62-264769272a1b-kube-api-access-q6tjj\") pod \"cinder-volume-volume1-0\" (UID: \"186f682e-35c9-47ac-8e62-264769272a1b\") " pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.477061 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:33 crc kubenswrapper[4904]: I1121 15:00:33.490398 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.100782 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.103892 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.107292 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.109403 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.109585 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.109871 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qncwq" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.141968 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.142062 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.142190 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.142293 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.142352 4904 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-logs\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.142380 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-ceph\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.142924 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.142993 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.143211 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8frv\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-kube-api-access-z8frv\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.147620 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.211201 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.217693 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.248129 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.248491 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.250758 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.250810 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.250893 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8frv\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-kube-api-access-z8frv\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.250934 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.250959 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.251001 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.251045 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.251091 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-logs\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 
15:00:34.251115 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-ceph\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.251488 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.269183 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-logs\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.269474 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.280906 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.283153 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-config-data\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.287550 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-ceph\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.307167 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-scripts\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.310702 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.337843 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8frv\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-kube-api-access-z8frv\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:34 crc 
kubenswrapper[4904]: I1121 15:00:34.351629 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-hjx5b"] Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.356511 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hjx5b" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.364891 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-logs\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.364984 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-ceph\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.365015 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.365064 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.365107 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr265\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-kube-api-access-vr265\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.365136 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.365184 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.365202 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:34 crc kubenswrapper[4904]: 
I1121 15:00:34.365225 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.375262 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.383095 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-hjx5b"]
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.412924 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " pod="openstack/glance-default-external-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.468185 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-ceph\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.468651 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.468814 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptk2q\" (UniqueName: \"kubernetes.io/projected/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-kube-api-access-ptk2q\") pod \"manila-db-create-hjx5b\" (UID: \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\") " pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.468956 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.469101 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-operator-scripts\") pod \"manila-db-create-hjx5b\" (UID: \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\") " pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.469219 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr265\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-kube-api-access-vr265\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.469350 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.469517 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.469694 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.469826 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.470002 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-logs\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.474886 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.477321 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.478232 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-logs\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.486463 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.506008 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-config-data\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.515630 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.524666 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-ceph\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.606066 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-scripts\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.610811 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr265\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-kube-api-access-vr265\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.615736 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptk2q\" (UniqueName: \"kubernetes.io/projected/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-kube-api-access-ptk2q\") pod \"manila-db-create-hjx5b\" (UID: \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\") " pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.616091 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-operator-scripts\") pod \"manila-db-create-hjx5b\" (UID: \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\") " pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.619638 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-operator-scripts\") pod \"manila-db-create-hjx5b\" (UID: \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\") " pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.733667 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.742457 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-d760-account-create-z7s4m"]
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.751286 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.771612 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptk2q\" (UniqueName: \"kubernetes.io/projected/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-kube-api-access-ptk2q\") pod \"manila-db-create-hjx5b\" (UID: \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\") " pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.784546 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.788769 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.846414 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.846916 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d760-account-create-z7s4m"]
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.910217 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzz74\" (UniqueName: \"kubernetes.io/projected/92872ce6-2457-4262-9960-71617702e973-kube-api-access-pzz74\") pod \"manila-d760-account-create-z7s4m\" (UID: \"92872ce6-2457-4262-9960-71617702e973\") " pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.910381 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92872ce6-2457-4262-9960-71617702e973-operator-scripts\") pod \"manila-d760-account-create-z7s4m\" (UID: \"92872ce6-2457-4262-9960-71617702e973\") " pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.972746 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c88ff5bcc-k5zxp"]
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.975462 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.987141 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.987349 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Nov 21 15:00:34 crc kubenswrapper[4904]: I1121 15:00:34.987536 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.001325 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tpzdd"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.003451 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c88ff5bcc-k5zxp"]
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.017973 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzz74\" (UniqueName: \"kubernetes.io/projected/92872ce6-2457-4262-9960-71617702e973-kube-api-access-pzz74\") pod \"manila-d760-account-create-z7s4m\" (UID: \"92872ce6-2457-4262-9960-71617702e973\") " pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.018035 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92872ce6-2457-4262-9960-71617702e973-operator-scripts\") pod \"manila-d760-account-create-z7s4m\" (UID: \"92872ce6-2457-4262-9960-71617702e973\") " pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.019687 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92872ce6-2457-4262-9960-71617702e973-operator-scripts\") pod \"manila-d760-account-create-z7s4m\" (UID: \"92872ce6-2457-4262-9960-71617702e973\") " pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.048020 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.067333 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzz74\" (UniqueName: \"kubernetes.io/projected/92872ce6-2457-4262-9960-71617702e973-kube-api-access-pzz74\") pod \"manila-d760-account-create-z7s4m\" (UID: \"92872ce6-2457-4262-9960-71617702e973\") " pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.073761 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.087310 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.122856 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b824537a-619a-48a7-bf66-4e9639582be0-logs\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.122931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-scripts\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.123019 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t6zz\" (UniqueName: \"kubernetes.io/projected/b824537a-619a-48a7-bf66-4e9639582be0-kube-api-access-7t6zz\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.123045 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b824537a-619a-48a7-bf66-4e9639582be0-horizon-secret-key\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.123097 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-config-data\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.144735 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.161203 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5768cf7b89-vmm4n"]
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.164557 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.189604 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5768cf7b89-vmm4n"]
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.189847 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.225643 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"186f682e-35c9-47ac-8e62-264769272a1b","Type":"ContainerStarted","Data":"0e785e5f800742a168717ba4c97d3b2b06679a7a270e311a2bd6a5bbe8326b3b"}
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227613 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-scripts\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227724 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61534ca3-fad5-4d18-80cf-0331ec454e86-horizon-secret-key\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227766 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t6zz\" (UniqueName: \"kubernetes.io/projected/b824537a-619a-48a7-bf66-4e9639582be0-kube-api-access-7t6zz\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227794 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b824537a-619a-48a7-bf66-4e9639582be0-horizon-secret-key\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227821 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-config-data\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227841 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-config-data\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227892 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chsh8\" (UniqueName: \"kubernetes.io/projected/61534ca3-fad5-4d18-80cf-0331ec454e86-kube-api-access-chsh8\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227914 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61534ca3-fad5-4d18-80cf-0331ec454e86-logs\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.227977 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-scripts\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.228001 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b824537a-619a-48a7-bf66-4e9639582be0-logs\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.228619 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b824537a-619a-48a7-bf66-4e9639582be0-logs\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.247531 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-config-data\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.250767 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-scripts\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.269570 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b824537a-619a-48a7-bf66-4e9639582be0-horizon-secret-key\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.281477 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.283011 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t6zz\" (UniqueName: \"kubernetes.io/projected/b824537a-619a-48a7-bf66-4e9639582be0-kube-api-access-7t6zz\") pod \"horizon-c88ff5bcc-k5zxp\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.323512 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.337158 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61534ca3-fad5-4d18-80cf-0331ec454e86-horizon-secret-key\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.337295 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-config-data\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.337399 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chsh8\" (UniqueName: \"kubernetes.io/projected/61534ca3-fad5-4d18-80cf-0331ec454e86-kube-api-access-chsh8\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.337442 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61534ca3-fad5-4d18-80cf-0331ec454e86-logs\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.337550 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-scripts\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.345627 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61534ca3-fad5-4d18-80cf-0331ec454e86-horizon-secret-key\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.346377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61534ca3-fad5-4d18-80cf-0331ec454e86-logs\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.346500 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-config-data\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.338611 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-scripts\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.364581 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chsh8\" (UniqueName: \"kubernetes.io/projected/61534ca3-fad5-4d18-80cf-0331ec454e86-kube-api-access-chsh8\") pod \"horizon-5768cf7b89-vmm4n\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.528146 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.605893 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-hjx5b"]
Nov 21 15:00:35 crc kubenswrapper[4904]: I1121 15:00:35.833619 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.057779 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-d760-account-create-z7s4m"]
Nov 21 15:00:36 crc kubenswrapper[4904]: W1121 15:00:36.090315 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92872ce6_2457_4262_9960_71617702e973.slice/crio-fe1e94e87d058589a28f50388ad3e3dcc5ef98a5a451f040a7859cd15d749c27 WatchSource:0}: Error finding container fe1e94e87d058589a28f50388ad3e3dcc5ef98a5a451f040a7859cd15d749c27: Status 404 returned error can't find the container with id fe1e94e87d058589a28f50388ad3e3dcc5ef98a5a451f040a7859cd15d749c27
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.249215 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d760-account-create-z7s4m" event={"ID":"92872ce6-2457-4262-9960-71617702e973","Type":"ContainerStarted","Data":"fe1e94e87d058589a28f50388ad3e3dcc5ef98a5a451f040a7859cd15d749c27"}
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.263048 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1539aa29-c6fa-433e-8dae-068a77a07fce","Type":"ContainerStarted","Data":"63abec01076acc02ce27c8e0b78a1245656e901e7811799b1390e4af6ca2b418"}
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.269324 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4","Type":"ContainerStarted","Data":"a8c234ea25a58d5185edb475e371e4da540300a46703f65077cf06f11582abdc"}
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.289113 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hjx5b" event={"ID":"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5","Type":"ContainerStarted","Data":"1eb033ad46e6fd4cb0676b9fd68869c9108de90864b6908a7b0183a4a7aa12cb"}
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.289197 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hjx5b" event={"ID":"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5","Type":"ContainerStarted","Data":"5b6d5a939bea6fea1c78c8b20cc039c10c0d7d7b4c40fce28de502f5bcfebb04"}
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.294351 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c88ff5bcc-k5zxp"]
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.328485 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-hjx5b" podStartSLOduration=2.328450434 podStartE2EDuration="2.328450434s" podCreationTimestamp="2025-11-21 15:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:00:36.309707581 +0000 UTC m=+5310.431240133" watchObservedRunningTime="2025-11-21 15:00:36.328450434 +0000 UTC m=+5310.449982976"
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.457508 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 21 15:00:36 crc kubenswrapper[4904]: I1121 15:00:36.491915 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5768cf7b89-vmm4n"]
Nov 21 15:00:36 crc kubenswrapper[4904]: W1121 15:00:36.541838 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61534ca3_fad5_4d18_80cf_0331ec454e86.slice/crio-dbcd5113605f69e179b54d42ca12914fa48c7b8582e8ef153b698bbceda1d652 WatchSource:0}: Error finding container dbcd5113605f69e179b54d42ca12914fa48c7b8582e8ef153b698bbceda1d652: Status 404 returned error can't find the container with id dbcd5113605f69e179b54d42ca12914fa48c7b8582e8ef153b698bbceda1d652
Nov 21 15:00:37 crc kubenswrapper[4904]: I1121 15:00:37.313938 4904 generic.go:334] "Generic (PLEG): container finished" podID="92872ce6-2457-4262-9960-71617702e973" containerID="7ab5f5c29cb0c108eb4a44553672ecc0393b3ee693e8e848113609703df238a4" exitCode=0
Nov 21 15:00:37 crc kubenswrapper[4904]: I1121 15:00:37.314095 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d760-account-create-z7s4m" event={"ID":"92872ce6-2457-4262-9960-71617702e973","Type":"ContainerDied","Data":"7ab5f5c29cb0c108eb4a44553672ecc0393b3ee693e8e848113609703df238a4"}
Nov 21 15:00:37 crc kubenswrapper[4904]: I1121 15:00:37.322205 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88ff5bcc-k5zxp" event={"ID":"b824537a-619a-48a7-bf66-4e9639582be0","Type":"ContainerStarted","Data":"2b004c0ed7aceae34372834c50268f6dc5e41524eea77b61cd7ed6b07b167ad4"}
Nov 21 15:00:37 crc kubenswrapper[4904]: I1121 15:00:37.364193 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"421da29a-22fd-4c17-afb3-ef6e29117a19","Type":"ContainerStarted","Data":"d122f919c0c8aba0dfd2622614cf203a27282afd8a627b1eaf5f7e8a878d2c5b"}
Nov 21 15:00:37 crc kubenswrapper[4904]: I1121 15:00:37.372485 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"186f682e-35c9-47ac-8e62-264769272a1b","Type":"ContainerStarted","Data":"270dc01f62c24c0e6fa45d3a06209b02f7fa54b02a26fca16a3a064f5a7bfd37"}
Nov 21 15:00:37 crc kubenswrapper[4904]: I1121 15:00:37.388611 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5768cf7b89-vmm4n" event={"ID":"61534ca3-fad5-4d18-80cf-0331ec454e86","Type":"ContainerStarted","Data":"dbcd5113605f69e179b54d42ca12914fa48c7b8582e8ef153b698bbceda1d652"}
Nov 21 15:00:37 crc kubenswrapper[4904]: I1121 15:00:37.396887 4904 generic.go:334] "Generic (PLEG): container finished" podID="594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5" containerID="1eb033ad46e6fd4cb0676b9fd68869c9108de90864b6908a7b0183a4a7aa12cb" exitCode=0
Nov 21 15:00:37 crc kubenswrapper[4904]: I1121 15:00:37.396979 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hjx5b" event={"ID":"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5","Type":"ContainerDied","Data":"1eb033ad46e6fd4cb0676b9fd68869c9108de90864b6908a7b0183a4a7aa12cb"}
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.462486 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"421da29a-22fd-4c17-afb3-ef6e29117a19","Type":"ContainerStarted","Data":"f995a0e7d0d9962c047e55fbc84b15ddfb14b8f413515b2a9bfea922ff636345"}
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.478573 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1539aa29-c6fa-433e-8dae-068a77a07fce","Type":"ContainerStarted","Data":"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084"}
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.491892 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"186f682e-35c9-47ac-8e62-264769272a1b","Type":"ContainerStarted","Data":"4e645ad18ba7f2fcec00c3796f066ae93a5ea385dd94521e584c9e4bda2a547d"}
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.552645 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4","Type":"ContainerStarted","Data":"d746f8af11074a087dbac35ca37a1c9f66dd2783bbf920af1eb8ab41099b8a87"}
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.552711 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4","Type":"ContainerStarted","Data":"c4551d367f0b427942d6cf379c0bf2bd71b471380683bf36285f6443214716e6"}
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.576417 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.226877067 podStartE2EDuration="5.576383566s" podCreationTimestamp="2025-11-21 15:00:33 +0000 UTC" firstStartedPulling="2025-11-21 15:00:34.892923334 +0000 UTC m=+5309.014455886" lastFinishedPulling="2025-11-21 15:00:36.242429833 +0000 UTC m=+5310.363962385" observedRunningTime="2025-11-21 15:00:38.557222612 +0000 UTC m=+5312.678755164" watchObservedRunningTime="2025-11-21 15:00:38.576383566 +0000 UTC m=+5312.697916128"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.613418 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.298577442 podStartE2EDuration="5.613392941s" podCreationTimestamp="2025-11-21 15:00:33 +0000 UTC" firstStartedPulling="2025-11-21 15:00:35.320362788 +0000 UTC m=+5309.441895330" lastFinishedPulling="2025-11-21 15:00:36.635178277 +0000 UTC m=+5310.756710829" observedRunningTime="2025-11-21 15:00:38.611773412 +0000 UTC m=+5312.733305984" watchObservedRunningTime="2025-11-21 15:00:38.613392941 +0000 UTC m=+5312.734925493"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.729428 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c88ff5bcc-k5zxp"]
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.792365 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-648f66fff6-bp2z7"]
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.794325 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.797350 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.857898 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-648f66fff6-bp2z7"]
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.903153 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5768cf7b89-vmm4n"]
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.965752 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-secret-key\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.965830 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cd8e52-9048-40c0-8f62-b36f39433908-logs\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.965875 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-scripts\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.965892 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-tls-certs\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.965931 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8kx9\" (UniqueName: \"kubernetes.io/projected/95cd8e52-9048-40c0-8f62-b36f39433908-kube-api-access-b8kx9\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.965972 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-combined-ca-bundle\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:38 crc kubenswrapper[4904]: I1121 15:00:38.966010 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-config-data\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.000951 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-695bd477bb-gcrxw"]
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.003890 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.015005 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-695bd477bb-gcrxw"]
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.068559 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-tls-certs\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.068676 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8kx9\" (UniqueName: \"kubernetes.io/projected/95cd8e52-9048-40c0-8f62-b36f39433908-kube-api-access-b8kx9\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.068740 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-combined-ca-bundle\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.068787 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-config-data\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.068893 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-secret-key\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.068932 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cd8e52-9048-40c0-8f62-b36f39433908-logs\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.069040 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-scripts\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.070030 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-scripts\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.070274 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cd8e52-9048-40c0-8f62-b36f39433908-logs\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.070692 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-config-data\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.076979 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-tls-certs\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.088255 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-combined-ca-bundle\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.096391 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-secret-key\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.101837 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8kx9\" (UniqueName: \"kubernetes.io/projected/95cd8e52-9048-40c0-8f62-b36f39433908-kube-api-access-b8kx9\") pod \"horizon-648f66fff6-bp2z7\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.145745 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-648f66fff6-bp2z7"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.174195 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-horizon-secret-key\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.174729 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc8baac-d51a-42f4-9444-c8e4172be134-logs\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.174806 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-combined-ca-bundle\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.174897 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fc8baac-d51a-42f4-9444-c8e4172be134-config-data\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.174935 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-horizon-tls-certs\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.174974 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fc8baac-d51a-42f4-9444-c8e4172be134-scripts\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.175096 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghq7b\" (UniqueName: \"kubernetes.io/projected/7fc8baac-d51a-42f4-9444-c8e4172be134-kube-api-access-ghq7b\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.288088 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fc8baac-d51a-42f4-9444-c8e4172be134-config-data\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.288159 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-horizon-tls-certs\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.289710 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fc8baac-d51a-42f4-9444-c8e4172be134-scripts\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.290190 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7fc8baac-d51a-42f4-9444-c8e4172be134-config-data\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.290494 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7fc8baac-d51a-42f4-9444-c8e4172be134-scripts\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.290852 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghq7b\" (UniqueName: \"kubernetes.io/projected/7fc8baac-d51a-42f4-9444-c8e4172be134-kube-api-access-ghq7b\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.291083 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-horizon-secret-key\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.291114 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc8baac-d51a-42f4-9444-c8e4172be134-logs\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.291205 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-combined-ca-bundle\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.292138 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-horizon-tls-certs\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.303895 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fc8baac-d51a-42f4-9444-c8e4172be134-logs\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.304331 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-combined-ca-bundle\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.316369 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7fc8baac-d51a-42f4-9444-c8e4172be134-horizon-secret-key\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.325310 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghq7b\" (UniqueName: \"kubernetes.io/projected/7fc8baac-d51a-42f4-9444-c8e4172be134-kube-api-access-ghq7b\") pod \"horizon-695bd477bb-gcrxw\" (UID: \"7fc8baac-d51a-42f4-9444-c8e4172be134\") " pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.361682 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-695bd477bb-gcrxw"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.583214 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerName="glance-log" containerID="cri-o://f995a0e7d0d9962c047e55fbc84b15ddfb14b8f413515b2a9bfea922ff636345" gracePeriod=30
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.584366 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"421da29a-22fd-4c17-afb3-ef6e29117a19","Type":"ContainerStarted","Data":"2273d7982865945e6cf313b070d70002d565bb71ae3132ce81aef1e9c86c9785"}
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.585856 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerName="glance-httpd" containerID="cri-o://2273d7982865945e6cf313b070d70002d565bb71ae3132ce81aef1e9c86c9785" gracePeriod=30
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.663466 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.663426533 podStartE2EDuration="6.663426533s" podCreationTimestamp="2025-11-21 15:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:00:39.641956643 +0000 UTC m=+5313.763489195" watchObservedRunningTime="2025-11-21 15:00:39.663426533 +0000 UTC m=+5313.784959085"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.921150 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.934616 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.958314 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptk2q\" (UniqueName: \"kubernetes.io/projected/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-kube-api-access-ptk2q\") pod \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\" (UID: \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\") "
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.958471 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzz74\" (UniqueName: \"kubernetes.io/projected/92872ce6-2457-4262-9960-71617702e973-kube-api-access-pzz74\") pod \"92872ce6-2457-4262-9960-71617702e973\" (UID: \"92872ce6-2457-4262-9960-71617702e973\") "
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.958542 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92872ce6-2457-4262-9960-71617702e973-operator-scripts\") pod \"92872ce6-2457-4262-9960-71617702e973\" (UID: \"92872ce6-2457-4262-9960-71617702e973\") "
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.958608 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-operator-scripts\") pod \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\" (UID: \"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5\") "
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.976564 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5" (UID: "594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.979062 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92872ce6-2457-4262-9960-71617702e973-kube-api-access-pzz74" (OuterVolumeSpecName: "kube-api-access-pzz74") pod "92872ce6-2457-4262-9960-71617702e973" (UID: "92872ce6-2457-4262-9960-71617702e973"). InnerVolumeSpecName "kube-api-access-pzz74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.982381 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92872ce6-2457-4262-9960-71617702e973-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92872ce6-2457-4262-9960-71617702e973" (UID: "92872ce6-2457-4262-9960-71617702e973"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 15:00:39 crc kubenswrapper[4904]: I1121 15:00:39.984854 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-kube-api-access-ptk2q" (OuterVolumeSpecName: "kube-api-access-ptk2q") pod "594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5" (UID: "594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5"). InnerVolumeSpecName "kube-api-access-ptk2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.066004 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptk2q\" (UniqueName: \"kubernetes.io/projected/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-kube-api-access-ptk2q\") on node \"crc\" DevicePath \"\""
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.066045 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzz74\" (UniqueName: \"kubernetes.io/projected/92872ce6-2457-4262-9960-71617702e973-kube-api-access-pzz74\") on node \"crc\" DevicePath \"\""
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.066060 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92872ce6-2457-4262-9960-71617702e973-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.066071 4904 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 15:00:40 crc kubenswrapper[4904]: E1121 15:00:40.073315 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod421da29a_22fd_4c17_afb3_ef6e29117a19.slice/crio-2273d7982865945e6cf313b070d70002d565bb71ae3132ce81aef1e9c86c9785.scope\": RecentStats: unable to find data in memory cache]"
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.085371 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-648f66fff6-bp2z7"]
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.334501 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-695bd477bb-gcrxw"]
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.610835 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hjx5b"
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.611082 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hjx5b" event={"ID":"594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5","Type":"ContainerDied","Data":"5b6d5a939bea6fea1c78c8b20cc039c10c0d7d7b4c40fce28de502f5bcfebb04"}
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.611714 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6d5a939bea6fea1c78c8b20cc039c10c0d7d7b4c40fce28de502f5bcfebb04"
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.619861 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-d760-account-create-z7s4m"
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.619935 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-d760-account-create-z7s4m" event={"ID":"92872ce6-2457-4262-9960-71617702e973","Type":"ContainerDied","Data":"fe1e94e87d058589a28f50388ad3e3dcc5ef98a5a451f040a7859cd15d749c27"}
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.619998 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe1e94e87d058589a28f50388ad3e3dcc5ef98a5a451f040a7859cd15d749c27"
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.624013 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-648f66fff6-bp2z7" event={"ID":"95cd8e52-9048-40c0-8f62-b36f39433908","Type":"ContainerStarted","Data":"303d1a5169b64d2feb34450fc31456a99d81a964b0707806b0ad57735ce3bf66"}
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.630647 4904 generic.go:334] "Generic (PLEG): container finished" podID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerID="2273d7982865945e6cf313b070d70002d565bb71ae3132ce81aef1e9c86c9785" exitCode=143
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.630694 4904 generic.go:334] "Generic (PLEG): container finished" podID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerID="f995a0e7d0d9962c047e55fbc84b15ddfb14b8f413515b2a9bfea922ff636345" exitCode=143
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.630761 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"421da29a-22fd-4c17-afb3-ef6e29117a19","Type":"ContainerDied","Data":"2273d7982865945e6cf313b070d70002d565bb71ae3132ce81aef1e9c86c9785"}
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.630798 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"421da29a-22fd-4c17-afb3-ef6e29117a19","Type":"ContainerDied","Data":"f995a0e7d0d9962c047e55fbc84b15ddfb14b8f413515b2a9bfea922ff636345"}
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.639988 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1539aa29-c6fa-433e-8dae-068a77a07fce","Type":"ContainerStarted","Data":"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532"}
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.640101 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerName="glance-log" containerID="cri-o://af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084" gracePeriod=30
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.640270 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerName="glance-httpd" containerID="cri-o://cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532" gracePeriod=30
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.646507 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695bd477bb-gcrxw" event={"ID":"7fc8baac-d51a-42f4-9444-c8e4172be134","Type":"ContainerStarted","Data":"c15857907eea9300eda91f2537ce6ae6100af862e65424596065c01e276093cb"}
Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.703135 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack/glance-default-external-api-0" podStartSLOduration=7.703112304 podStartE2EDuration="7.703112304s" podCreationTimestamp="2025-11-21 15:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:00:40.670796881 +0000 UTC m=+5314.792329443" watchObservedRunningTime="2025-11-21 15:00:40.703112304 +0000 UTC m=+5314.824644856" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.724398 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795292 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-config-data\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795355 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-internal-tls-certs\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795388 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-scripts\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795456 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-httpd-run\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795528 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-combined-ca-bundle\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795615 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr265\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-kube-api-access-vr265\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795638 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-ceph\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795689 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-logs\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.795707 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"421da29a-22fd-4c17-afb3-ef6e29117a19\" (UID: \"421da29a-22fd-4c17-afb3-ef6e29117a19\") " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.797587 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-logs" (OuterVolumeSpecName: "logs") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.798244 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.811728 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-ceph" (OuterVolumeSpecName: "ceph") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.814837 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-scripts" (OuterVolumeSpecName: "scripts") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.816062 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.818685 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-kube-api-access-vr265" (OuterVolumeSpecName: "kube-api-access-vr265") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "kube-api-access-vr265". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.860026 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.898017 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr265\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-kube-api-access-vr265\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.898052 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/421da29a-22fd-4c17-afb3-ef6e29117a19-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.898064 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-logs\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.898095 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.898104 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.898112 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/421da29a-22fd-4c17-afb3-ef6e29117a19-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.898119 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.907918 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.918434 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-config-data" (OuterVolumeSpecName: "config-data") pod "421da29a-22fd-4c17-afb3-ef6e29117a19" (UID: "421da29a-22fd-4c17-afb3-ef6e29117a19"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:40 crc kubenswrapper[4904]: I1121 15:00:40.927062 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.001273 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.001315 4904 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/421da29a-22fd-4c17-afb3-ef6e29117a19-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.001328 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.673260 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.675648 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"421da29a-22fd-4c17-afb3-ef6e29117a19","Type":"ContainerDied","Data":"d122f919c0c8aba0dfd2622614cf203a27282afd8a627b1eaf5f7e8a878d2c5b"} Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.675846 4904 scope.go:117] "RemoveContainer" containerID="2273d7982865945e6cf313b070d70002d565bb71ae3132ce81aef1e9c86c9785" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.675989 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.681901 4904 generic.go:334] "Generic (PLEG): container finished" podID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerID="cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532" exitCode=0 Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.682057 4904 generic.go:334] "Generic (PLEG): container finished" podID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerID="af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084" exitCode=143 Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.682137 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1539aa29-c6fa-433e-8dae-068a77a07fce","Type":"ContainerDied","Data":"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532"} Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.682217 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1539aa29-c6fa-433e-8dae-068a77a07fce","Type":"ContainerDied","Data":"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084"} Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.682287 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1539aa29-c6fa-433e-8dae-068a77a07fce","Type":"ContainerDied","Data":"63abec01076acc02ce27c8e0b78a1245656e901e7811799b1390e4af6ca2b418"} Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.682415 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.742434 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-public-tls-certs\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.743071 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.743146 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-config-data\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.743175 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8frv\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-kube-api-access-z8frv\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.743334 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-ceph\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.743389 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-httpd-run\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.743409 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-combined-ca-bundle\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.743433 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-scripts\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.743448 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-logs\") pod \"1539aa29-c6fa-433e-8dae-068a77a07fce\" (UID: \"1539aa29-c6fa-433e-8dae-068a77a07fce\") " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.746979 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-logs" (OuterVolumeSpecName: "logs") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.747552 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.759322 4904 scope.go:117] "RemoveContainer" containerID="f995a0e7d0d9962c047e55fbc84b15ddfb14b8f413515b2a9bfea922ff636345" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.782828 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.786101 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-kube-api-access-z8frv" (OuterVolumeSpecName: "kube-api-access-z8frv") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "kube-api-access-z8frv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.799505 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-scripts" (OuterVolumeSpecName: "scripts") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.815199 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-ceph" (OuterVolumeSpecName: "ceph") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.865186 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.865232 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8frv\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-kube-api-access-z8frv\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.865246 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1539aa29-c6fa-433e-8dae-068a77a07fce-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.865258 4904 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.865266 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.865274 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1539aa29-c6fa-433e-8dae-068a77a07fce-logs\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.866491 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.875237 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.897892 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.914724 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.921309 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 15:00:41 crc kubenswrapper[4904]: E1121 15:00:41.921944 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerName="glance-httpd" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.921958 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerName="glance-httpd" Nov 21 15:00:41 crc kubenswrapper[4904]: E1121 15:00:41.921984 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerName="glance-httpd" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.921991 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerName="glance-httpd" Nov 21 15:00:41 crc kubenswrapper[4904]: E1121 15:00:41.922020 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5" containerName="mariadb-database-create" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922028 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5" containerName="mariadb-database-create" Nov 21 15:00:41 crc kubenswrapper[4904]: E1121 15:00:41.922036 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92872ce6-2457-4262-9960-71617702e973" containerName="mariadb-account-create" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922042 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="92872ce6-2457-4262-9960-71617702e973" containerName="mariadb-account-create" Nov 21 15:00:41 crc kubenswrapper[4904]: E1121 15:00:41.922061 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerName="glance-log" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922068 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerName="glance-log" Nov 21 15:00:41 crc kubenswrapper[4904]: E1121 15:00:41.922081 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerName="glance-log" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922088 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerName="glance-log" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922295 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5" containerName="mariadb-database-create" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922310 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="92872ce6-2457-4262-9960-71617702e973" containerName="mariadb-account-create" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922324 4904 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerName="glance-httpd" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922341 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerName="glance-httpd" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922356 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" containerName="glance-log" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.922366 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" containerName="glance-log" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.938711 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-config-data" (OuterVolumeSpecName: "config-data") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.945844 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.950977 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.951395 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.976920 4904 scope.go:117] "RemoveContainer" containerID="cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532" Nov 21 15:00:41 crc kubenswrapper[4904]: I1121 15:00:41.993252 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1539aa29-c6fa-433e-8dae-068a77a07fce" (UID: "1539aa29-c6fa-433e-8dae-068a77a07fce"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.023745 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.023789 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.023809 4904 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1539aa29-c6fa-433e-8dae-068a77a07fce-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.023820 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.040960 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.091985 4904 scope.go:117] "RemoveContainer" containerID="af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.128887 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.128970 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.129086 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.129112 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.129156 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmwlr\" (UniqueName: \"kubernetes.io/projected/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-kube-api-access-gmwlr\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.129235 4904 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.129353 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.129430 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.129470 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.169382 4904 scope.go:117] "RemoveContainer" containerID="cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532" Nov 21 15:00:42 crc kubenswrapper[4904]: E1121 15:00:42.170478 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532\": container with ID starting with cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532 not found: ID does not exist" containerID="cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.170530 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532"} err="failed to get container status \"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532\": rpc error: code = NotFound desc = could not find container \"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532\": container with ID starting with cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532 not found: ID does not exist" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.170564 4904 scope.go:117] "RemoveContainer" containerID="af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084" Nov 21 15:00:42 crc kubenswrapper[4904]: E1121 15:00:42.171042 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084\": container with ID starting with af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084 not found: ID does not exist" containerID="af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.171083 4904 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084"} err="failed to get container status \"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084\": rpc error: code = NotFound desc = could not find container \"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084\": container with ID starting with af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084 not found: ID does not exist" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.171103 4904 scope.go:117] "RemoveContainer" containerID="cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.171463 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532"} err="failed to get container status \"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532\": rpc error: code = NotFound desc = could not find container \"cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532\": container with ID starting with cc156eb2b805925bf8f4d4990983564d5ce5081a9db3e0d5dd870e92ba731532 not found: ID does not exist" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.171489 4904 scope.go:117] "RemoveContainer" containerID="af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.172587 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084"} err="failed to get container status \"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084\": rpc error: code = NotFound desc = could not find container \"af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084\": container with ID starting with af862ffed579a4a51f69b99c64283533d180eb13358b27b1a9ce223a9881e084 not found: ID does not exist" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236092 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236233 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236265 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236324 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmwlr\" (UniqueName: \"kubernetes.io/projected/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-kube-api-access-gmwlr\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " 
pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236351 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236399 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236464 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236503 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.236685 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.237209 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.245551 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.246338 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.246366 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-logs\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.252769 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.254534 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.256716 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.259480 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-ceph\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.268535 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmwlr\" (UniqueName: \"kubernetes.io/projected/e079f5ff-96a7-417b-bc95-a4578fe3a4ec-kube-api-access-gmwlr\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.304914 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"e079f5ff-96a7-417b-bc95-a4578fe3a4ec\") " pod="openstack/glance-default-internal-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.344605 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.377267 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.413076 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.415207 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.417134 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.419864 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.446137 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.450944 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d8cf362a-fd51-4493-967e-aa0462ce4007-ceph\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.451368 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.451917 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.452332 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhb68\" (UniqueName: \"kubernetes.io/projected/d8cf362a-fd51-4493-967e-aa0462ce4007-kube-api-access-rhb68\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.452520 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8cf362a-fd51-4493-967e-aa0462ce4007-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.453692 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.453950 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-config-data\") pod \"glance-default-external-api-0\" 
(UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.454204 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8cf362a-fd51-4493-967e-aa0462ce4007-logs\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.469232 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.533262 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1539aa29-c6fa-433e-8dae-068a77a07fce" path="/var/lib/kubelet/pods/1539aa29-c6fa-433e-8dae-068a77a07fce/volumes" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.534763 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421da29a-22fd-4c17-afb3-ef6e29117a19" path="/var/lib/kubelet/pods/421da29a-22fd-4c17-afb3-ef6e29117a19/volumes" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.559333 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.559421 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d8cf362a-fd51-4493-967e-aa0462ce4007-ceph\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.559475 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.559599 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.559733 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhb68\" (UniqueName: \"kubernetes.io/projected/d8cf362a-fd51-4493-967e-aa0462ce4007-kube-api-access-rhb68\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.559781 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8cf362a-fd51-4493-967e-aa0462ce4007-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.561510 4904 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.570284 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.570398 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.570532 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8cf362a-fd51-4493-967e-aa0462ce4007-logs\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.571647 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8cf362a-fd51-4493-967e-aa0462ce4007-logs\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.573181 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d8cf362a-fd51-4493-967e-aa0462ce4007-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:42 crc kubenswrapper[4904]: I1121 15:00:42.610577 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.352620 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d8cf362a-fd51-4493-967e-aa0462ce4007-ceph\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.352919 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.353390 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-scripts\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.355683 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.374396 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhb68\" (UniqueName: \"kubernetes.io/projected/d8cf362a-fd51-4493-967e-aa0462ce4007-kube-api-access-rhb68\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.379862 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8cf362a-fd51-4493-967e-aa0462ce4007-config-data\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.450771 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"d8cf362a-fd51-4493-967e-aa0462ce4007\") " pod="openstack/glance-default-external-api-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.478230 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:43 crc kubenswrapper[4904]: I1121 15:00:43.492545 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:43.699334 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:43.805045 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:43.926414 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="186f682e-35c9-47ac-8e62-264769272a1b" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.185482 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-hk6xj"] Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.188059 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.196792 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-hk6xj"] Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.196951 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-497mv" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.197239 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.280591 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-config-data\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.280751 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-combined-ca-bundle\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.280854 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-job-config-data\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.281030 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psgf7\" (UniqueName: \"kubernetes.io/projected/21abe39c-baf1-4382-b041-ec03d59a99b5-kube-api-access-psgf7\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.384888 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-combined-ca-bundle\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.384960 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: 
\"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-job-config-data\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.385013 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psgf7\" (UniqueName: \"kubernetes.io/projected/21abe39c-baf1-4382-b041-ec03d59a99b5-kube-api-access-psgf7\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:45 crc kubenswrapper[4904]: I1121 15:00:45.385135 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-config-data\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:46 crc kubenswrapper[4904]: I1121 15:00:46.052170 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-config-data\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:46 crc kubenswrapper[4904]: I1121 15:00:46.052956 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-job-config-data\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:46 crc kubenswrapper[4904]: I1121 15:00:46.053534 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-combined-ca-bundle\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:46 crc kubenswrapper[4904]: I1121 15:00:46.053729 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psgf7\" (UniqueName: \"kubernetes.io/projected/21abe39c-baf1-4382-b041-ec03d59a99b5-kube-api-access-psgf7\") pod \"manila-db-sync-hk6xj\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:46 crc kubenswrapper[4904]: I1121 15:00:46.096254 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 21 15:00:46 crc kubenswrapper[4904]: I1121 15:00:46.115521 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-hk6xj" Nov 21 15:00:46 crc kubenswrapper[4904]: I1121 15:00:46.307455 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 21 15:00:48 crc kubenswrapper[4904]: I1121 15:00:48.489535 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Nov 21 15:00:52 crc kubenswrapper[4904]: W1121 15:00:52.192715 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode079f5ff_96a7_417b_bc95_a4578fe3a4ec.slice/crio-8e4173e21901273dfc69bad847ce062d5edf46328b9e41d6c9c04e9706eb1d9a WatchSource:0}: Error finding container 8e4173e21901273dfc69bad847ce062d5edf46328b9e41d6c9c04e9706eb1d9a: Status 404 returned error can't find the container with id 8e4173e21901273dfc69bad847ce062d5edf46328b9e41d6c9c04e9706eb1d9a Nov 21 15:00:52 crc kubenswrapper[4904]: W1121 15:00:52.212027 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8cf362a_fd51_4493_967e_aa0462ce4007.slice/crio-de4e148922c9674c66087c117fd3a88f49a6c83b468dee019b9054c026f21b3a WatchSource:0}: Error finding container de4e148922c9674c66087c117fd3a88f49a6c83b468dee019b9054c026f21b3a: Status 404 returned error can't find the container with id de4e148922c9674c66087c117fd3a88f49a6c83b468dee019b9054c026f21b3a Nov 21 15:00:52 crc kubenswrapper[4904]: I1121 15:00:52.890620 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-hk6xj"] Nov 21 15:00:52 crc kubenswrapper[4904]: I1121 15:00:52.905868 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cf362a-fd51-4493-967e-aa0462ce4007","Type":"ContainerStarted","Data":"de4e148922c9674c66087c117fd3a88f49a6c83b468dee019b9054c026f21b3a"} Nov 21 15:00:52 crc kubenswrapper[4904]: I1121 15:00:52.916233 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695bd477bb-gcrxw" event={"ID":"7fc8baac-d51a-42f4-9444-c8e4172be134","Type":"ContainerStarted","Data":"0a14f49663add1ff49e07a96ba2a13381fa645a76a83719dae54ba3f4f904c31"} Nov 21 15:00:52 crc kubenswrapper[4904]: I1121 15:00:52.923977 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e079f5ff-96a7-417b-bc95-a4578fe3a4ec","Type":"ContainerStarted","Data":"8e4173e21901273dfc69bad847ce062d5edf46328b9e41d6c9c04e9706eb1d9a"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.943504 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cf362a-fd51-4493-967e-aa0462ce4007","Type":"ContainerStarted","Data":"87d5edd654a77e4ab44ef2d7a5e2001745c006785e2360316d0c4194c43c5728"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.947185 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-648f66fff6-bp2z7" event={"ID":"95cd8e52-9048-40c0-8f62-b36f39433908","Type":"ContainerStarted","Data":"11ceaf74ebc5a58e4025282426be09a7c7c72ef4a5297ca34829896518d140fa"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.947258 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-648f66fff6-bp2z7" event={"ID":"95cd8e52-9048-40c0-8f62-b36f39433908","Type":"ContainerStarted","Data":"fc01bcb4218b7b9864118de827a26def8b52df4c425fa708adcd7dd3befa74dd"} Nov 21 15:00:53 crc 
kubenswrapper[4904]: I1121 15:00:53.951756 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-695bd477bb-gcrxw" event={"ID":"7fc8baac-d51a-42f4-9444-c8e4172be134","Type":"ContainerStarted","Data":"ce3b577301c99be7598730a9feaa04eb36ed7339c8b93de9c87bf7845c7935d8"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.955622 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5768cf7b89-vmm4n" event={"ID":"61534ca3-fad5-4d18-80cf-0331ec454e86","Type":"ContainerStarted","Data":"526b365148baa9681efc342f8d4c2d5626f37267b829536580af4e0abb372097"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.955710 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5768cf7b89-vmm4n" event={"ID":"61534ca3-fad5-4d18-80cf-0331ec454e86","Type":"ContainerStarted","Data":"1947a59e53af66a1e0265c9db38ddb6282596e889511062ac26cb2f5ee158d1f"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.955900 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5768cf7b89-vmm4n" podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerName="horizon-log" containerID="cri-o://1947a59e53af66a1e0265c9db38ddb6282596e889511062ac26cb2f5ee158d1f" gracePeriod=30 Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.956054 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5768cf7b89-vmm4n" podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerName="horizon" containerID="cri-o://526b365148baa9681efc342f8d4c2d5626f37267b829536580af4e0abb372097" gracePeriod=30 Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.960192 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hk6xj" event={"ID":"21abe39c-baf1-4382-b041-ec03d59a99b5","Type":"ContainerStarted","Data":"079d0cd950d4ea5d75ba0cc3e533ad82f48fc01e2b88ea9568a48d1b4ab296b3"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.962874 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e079f5ff-96a7-417b-bc95-a4578fe3a4ec","Type":"ContainerStarted","Data":"a391d96d4d723f65ce420528a15bfd5e990d2c008a81401127cfc07c69d7c8d5"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.969467 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88ff5bcc-k5zxp" event={"ID":"b824537a-619a-48a7-bf66-4e9639582be0","Type":"ContainerStarted","Data":"730e4f29e295d35e6c8a12976923eb16e6c96af69b3d24b78324d55b387894a4"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.969530 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88ff5bcc-k5zxp" event={"ID":"b824537a-619a-48a7-bf66-4e9639582be0","Type":"ContainerStarted","Data":"ba19146ff6ef5776fa29e2b0bd8420cc110114affaedf76368924de7a15c1d77"} Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.969751 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c88ff5bcc-k5zxp" podUID="b824537a-619a-48a7-bf66-4e9639582be0" containerName="horizon-log" containerID="cri-o://ba19146ff6ef5776fa29e2b0bd8420cc110114affaedf76368924de7a15c1d77" gracePeriod=30 Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.969843 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-c88ff5bcc-k5zxp" podUID="b824537a-619a-48a7-bf66-4e9639582be0" containerName="horizon" containerID="cri-o://730e4f29e295d35e6c8a12976923eb16e6c96af69b3d24b78324d55b387894a4" 
gracePeriod=30 Nov 21 15:00:53 crc kubenswrapper[4904]: I1121 15:00:53.985692 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-648f66fff6-bp2z7" podStartSLOduration=3.535838026 podStartE2EDuration="15.985646197s" podCreationTimestamp="2025-11-21 15:00:38 +0000 UTC" firstStartedPulling="2025-11-21 15:00:40.098378599 +0000 UTC m=+5314.219911151" lastFinishedPulling="2025-11-21 15:00:52.54818677 +0000 UTC m=+5326.669719322" observedRunningTime="2025-11-21 15:00:53.97296984 +0000 UTC m=+5328.094502392" watchObservedRunningTime="2025-11-21 15:00:53.985646197 +0000 UTC m=+5328.107178749" Nov 21 15:00:54 crc kubenswrapper[4904]: I1121 15:00:54.008047 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c88ff5bcc-k5zxp" podStartSLOduration=4.124551992 podStartE2EDuration="20.008018579s" podCreationTimestamp="2025-11-21 15:00:34 +0000 UTC" firstStartedPulling="2025-11-21 15:00:36.528024875 +0000 UTC m=+5310.649557417" lastFinishedPulling="2025-11-21 15:00:52.411491462 +0000 UTC m=+5326.533024004" observedRunningTime="2025-11-21 15:00:53.997635058 +0000 UTC m=+5328.119167620" watchObservedRunningTime="2025-11-21 15:00:54.008018579 +0000 UTC m=+5328.129551131" Nov 21 15:00:54 crc kubenswrapper[4904]: I1121 15:00:54.021095 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-695bd477bb-gcrxw" podStartSLOduration=3.973231992 podStartE2EDuration="16.021065314s" podCreationTimestamp="2025-11-21 15:00:38 +0000 UTC" firstStartedPulling="2025-11-21 15:00:40.356219259 +0000 UTC m=+5314.477751811" lastFinishedPulling="2025-11-21 15:00:52.404052581 +0000 UTC m=+5326.525585133" observedRunningTime="2025-11-21 15:00:54.018637926 +0000 UTC m=+5328.140170478" watchObservedRunningTime="2025-11-21 15:00:54.021065314 +0000 UTC m=+5328.142597866" Nov 21 15:00:54 crc kubenswrapper[4904]: I1121 15:00:54.050554 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5768cf7b89-vmm4n" podStartSLOduration=3.254008083 podStartE2EDuration="19.050522027s" podCreationTimestamp="2025-11-21 15:00:35 +0000 UTC" firstStartedPulling="2025-11-21 15:00:36.613192035 +0000 UTC m=+5310.734724577" lastFinishedPulling="2025-11-21 15:00:52.409705969 +0000 UTC m=+5326.531238521" observedRunningTime="2025-11-21 15:00:54.04112829 +0000 UTC m=+5328.162660852" watchObservedRunningTime="2025-11-21 15:00:54.050522027 +0000 UTC m=+5328.172054589" Nov 21 15:00:54 crc kubenswrapper[4904]: I1121 15:00:54.986065 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e079f5ff-96a7-417b-bc95-a4578fe3a4ec","Type":"ContainerStarted","Data":"382e6523fe4516b9fb4237eb2c80843d0f851656fa49dbe73f2a7a60ff4dabaf"} Nov 21 15:00:54 crc kubenswrapper[4904]: I1121 15:00:54.989759 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d8cf362a-fd51-4493-967e-aa0462ce4007","Type":"ContainerStarted","Data":"de4f39a4e78ff23bb5bb3056ff5b7af7f8922c17a79b0d0aeff5fb952f27e24c"} Nov 21 15:00:55 crc kubenswrapper[4904]: I1121 15:00:55.024362 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.024344014 podStartE2EDuration="14.024344014s" podCreationTimestamp="2025-11-21 15:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-21 15:00:55.00719636 +0000 UTC m=+5329.128728912" watchObservedRunningTime="2025-11-21 15:00:55.024344014 +0000 UTC m=+5329.145876566" Nov 21 15:00:55 crc kubenswrapper[4904]: I1121 15:00:55.048140 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.04811012 podStartE2EDuration="13.04811012s" podCreationTimestamp="2025-11-21 15:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:00:55.039187304 +0000 UTC m=+5329.160719876" watchObservedRunningTime="2025-11-21 15:00:55.04811012 +0000 UTC m=+5329.169642672" Nov 21 15:00:55 crc kubenswrapper[4904]: I1121 15:00:55.324021 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c88ff5bcc-k5zxp" Nov 21 15:00:55 crc kubenswrapper[4904]: I1121 15:00:55.534338 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5768cf7b89-vmm4n" Nov 21 15:00:58 crc kubenswrapper[4904]: I1121 15:00:58.989258 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m7svb"] Nov 21 15:00:58 crc kubenswrapper[4904]: I1121 15:00:58.993242 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.042436 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m7svb"] Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.062494 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqzkq\" (UniqueName: \"kubernetes.io/projected/f48c1752-9d41-4564-bd95-577df0c3d210-kube-api-access-jqzkq\") pod \"certified-operators-m7svb\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.063005 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-utilities\") pod \"certified-operators-m7svb\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.063417 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-catalog-content\") pod \"certified-operators-m7svb\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.147025 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-648f66fff6-bp2z7" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.147505 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-648f66fff6-bp2z7" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.165852 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqzkq\" (UniqueName: \"kubernetes.io/projected/f48c1752-9d41-4564-bd95-577df0c3d210-kube-api-access-jqzkq\") pod \"certified-operators-m7svb\" (UID: 
\"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.165986 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-utilities\") pod \"certified-operators-m7svb\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.166125 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-catalog-content\") pod \"certified-operators-m7svb\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.166766 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-catalog-content\") pod \"certified-operators-m7svb\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.167211 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-utilities\") pod \"certified-operators-m7svb\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.262843 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqzkq\" (UniqueName: \"kubernetes.io/projected/f48c1752-9d41-4564-bd95-577df0c3d210-kube-api-access-jqzkq\") pod \"certified-operators-m7svb\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.346531 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.363315 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-695bd477bb-gcrxw" Nov 21 15:00:59 crc kubenswrapper[4904]: I1121 15:00:59.363757 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-695bd477bb-gcrxw" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.160272 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29395621-lkq6r"] Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.162015 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.175361 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395621-lkq6r"] Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.300884 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-config-data\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.301036 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-fernet-keys\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.301085 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-combined-ca-bundle\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.301140 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94cqn\" (UniqueName: \"kubernetes.io/projected/b6002e56-ad4f-43e5-928f-602c88ed887d-kube-api-access-94cqn\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.403340 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-config-data\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.403847 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-fernet-keys\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.403900 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-combined-ca-bundle\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.403942 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94cqn\" (UniqueName: \"kubernetes.io/projected/b6002e56-ad4f-43e5-928f-602c88ed887d-kube-api-access-94cqn\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.418175 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-fernet-keys\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.418395 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-config-data\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.426167 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94cqn\" (UniqueName: \"kubernetes.io/projected/b6002e56-ad4f-43e5-928f-602c88ed887d-kube-api-access-94cqn\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.427216 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-combined-ca-bundle\") pod \"keystone-cron-29395621-lkq6r\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:00 crc kubenswrapper[4904]: I1121 15:01:00.511037 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:01 crc kubenswrapper[4904]: I1121 15:01:01.128874 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m7svb"] Nov 21 15:01:01 crc kubenswrapper[4904]: I1121 15:01:01.276624 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395621-lkq6r"] Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.117852 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hk6xj" event={"ID":"21abe39c-baf1-4382-b041-ec03d59a99b5","Type":"ContainerStarted","Data":"ea84e43aeed9826334bac83e3ab307b54c70c718736ac8ec53510655f05e250d"} Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.124064 4904 generic.go:334] "Generic (PLEG): container finished" podID="f48c1752-9d41-4564-bd95-577df0c3d210" containerID="b481498cf884f0e7621125971cc9505c4d79db8acf6a9543bd7f42556b17e5cf" exitCode=0 Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.124158 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7svb" event={"ID":"f48c1752-9d41-4564-bd95-577df0c3d210","Type":"ContainerDied","Data":"b481498cf884f0e7621125971cc9505c4d79db8acf6a9543bd7f42556b17e5cf"} Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.124460 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7svb" event={"ID":"f48c1752-9d41-4564-bd95-577df0c3d210","Type":"ContainerStarted","Data":"d832504ad61dda23ee7ded73a6a0df599a08698793accadf63414a909a98318a"} Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.147668 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395621-lkq6r" event={"ID":"b6002e56-ad4f-43e5-928f-602c88ed887d","Type":"ContainerStarted","Data":"f102b63abfa773a7d15c4045a6eb7f08eb07832c0f1620e99b271adff2854990"} Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.147778 4904 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/keystone-cron-29395621-lkq6r" event={"ID":"b6002e56-ad4f-43e5-928f-602c88ed887d","Type":"ContainerStarted","Data":"704b2fbb5edf0cfae102afbd7eada3f16e67c1ddd98fbac75583d2e328520544"} Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.149470 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-hk6xj" podStartSLOduration=9.514305051000001 podStartE2EDuration="17.149442625s" podCreationTimestamp="2025-11-21 15:00:45 +0000 UTC" firstStartedPulling="2025-11-21 15:00:52.907176928 +0000 UTC m=+5327.028709480" lastFinishedPulling="2025-11-21 15:01:00.542314502 +0000 UTC m=+5334.663847054" observedRunningTime="2025-11-21 15:01:02.141750919 +0000 UTC m=+5336.263283471" watchObservedRunningTime="2025-11-21 15:01:02.149442625 +0000 UTC m=+5336.270975177" Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.204690 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29395621-lkq6r" podStartSLOduration=2.204662481 podStartE2EDuration="2.204662481s" podCreationTimestamp="2025-11-21 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:01:02.188579873 +0000 UTC m=+5336.310112445" watchObservedRunningTime="2025-11-21 15:01:02.204662481 +0000 UTC m=+5336.326195033" Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.612167 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.612611 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.659108 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 15:01:02 crc kubenswrapper[4904]: I1121 15:01:02.686101 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 21 15:01:03 crc kubenswrapper[4904]: I1121 15:01:03.158100 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 15:01:03 crc kubenswrapper[4904]: I1121 15:01:03.158163 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 21 15:01:03 crc kubenswrapper[4904]: I1121 15:01:03.700175 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 15:01:03 crc kubenswrapper[4904]: I1121 15:01:03.700637 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 21 15:01:03 crc kubenswrapper[4904]: I1121 15:01:03.746014 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 15:01:03 crc kubenswrapper[4904]: I1121 15:01:03.781307 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 21 15:01:04 crc kubenswrapper[4904]: I1121 15:01:04.167185 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 21 15:01:04 crc kubenswrapper[4904]: I1121 15:01:04.167468 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Nov 21 15:01:07 crc kubenswrapper[4904]: I1121 15:01:07.249327 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395621-lkq6r" event={"ID":"b6002e56-ad4f-43e5-928f-602c88ed887d","Type":"ContainerDied","Data":"f102b63abfa773a7d15c4045a6eb7f08eb07832c0f1620e99b271adff2854990"} Nov 21 15:01:07 crc kubenswrapper[4904]: I1121 15:01:07.249725 4904 generic.go:334] "Generic (PLEG): container finished" podID="b6002e56-ad4f-43e5-928f-602c88ed887d" containerID="f102b63abfa773a7d15c4045a6eb7f08eb07832c0f1620e99b271adff2854990" exitCode=0 Nov 21 15:01:08 crc kubenswrapper[4904]: I1121 15:01:08.132983 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 15:01:08 crc kubenswrapper[4904]: I1121 15:01:08.133441 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 15:01:08 crc kubenswrapper[4904]: I1121 15:01:08.139239 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 15:01:08 crc kubenswrapper[4904]: I1121 15:01:08.139375 4904 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 21 15:01:08 crc kubenswrapper[4904]: I1121 15:01:08.176846 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 21 15:01:08 crc kubenswrapper[4904]: I1121 15:01:08.216623 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 21 15:01:08 crc kubenswrapper[4904]: I1121 15:01:08.289585 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7svb" event={"ID":"f48c1752-9d41-4564-bd95-577df0c3d210","Type":"ContainerStarted","Data":"6aabe156e939f06c9a4c58f3d05676c449d7063a3ae53c294b8727b44199e19a"} Nov 21 15:01:09 crc kubenswrapper[4904]: I1121 15:01:09.154386 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-648f66fff6-bp2z7" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.75:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.75:8443: connect: connection refused" Nov 21 15:01:09 crc kubenswrapper[4904]: I1121 15:01:09.365084 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-695bd477bb-gcrxw" podUID="7fc8baac-d51a-42f4-9444-c8e4172be134" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.76:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.76:8443: connect: connection refused" Nov 21 15:01:09 crc kubenswrapper[4904]: I1121 15:01:09.844698 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.023361 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-combined-ca-bundle\") pod \"b6002e56-ad4f-43e5-928f-602c88ed887d\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.024165 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-config-data\") pod \"b6002e56-ad4f-43e5-928f-602c88ed887d\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.024417 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94cqn\" (UniqueName: \"kubernetes.io/projected/b6002e56-ad4f-43e5-928f-602c88ed887d-kube-api-access-94cqn\") pod \"b6002e56-ad4f-43e5-928f-602c88ed887d\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.024554 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-fernet-keys\") pod \"b6002e56-ad4f-43e5-928f-602c88ed887d\" (UID: \"b6002e56-ad4f-43e5-928f-602c88ed887d\") " Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.043080 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6002e56-ad4f-43e5-928f-602c88ed887d-kube-api-access-94cqn" (OuterVolumeSpecName: "kube-api-access-94cqn") pod "b6002e56-ad4f-43e5-928f-602c88ed887d" (UID: "b6002e56-ad4f-43e5-928f-602c88ed887d"). InnerVolumeSpecName "kube-api-access-94cqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.050270 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b6002e56-ad4f-43e5-928f-602c88ed887d" (UID: "b6002e56-ad4f-43e5-928f-602c88ed887d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.099677 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6002e56-ad4f-43e5-928f-602c88ed887d" (UID: "b6002e56-ad4f-43e5-928f-602c88ed887d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.128795 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94cqn\" (UniqueName: \"kubernetes.io/projected/b6002e56-ad4f-43e5-928f-602c88ed887d-kube-api-access-94cqn\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.128830 4904 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.128844 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.167442 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-config-data" (OuterVolumeSpecName: "config-data") pod "b6002e56-ad4f-43e5-928f-602c88ed887d" (UID: "b6002e56-ad4f-43e5-928f-602c88ed887d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.236800 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6002e56-ad4f-43e5-928f-602c88ed887d-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.322604 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395621-lkq6r" event={"ID":"b6002e56-ad4f-43e5-928f-602c88ed887d","Type":"ContainerDied","Data":"704b2fbb5edf0cfae102afbd7eada3f16e67c1ddd98fbac75583d2e328520544"} Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.322781 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="704b2fbb5edf0cfae102afbd7eada3f16e67c1ddd98fbac75583d2e328520544" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.322647 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395621-lkq6r" Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.326234 4904 generic.go:334] "Generic (PLEG): container finished" podID="f48c1752-9d41-4564-bd95-577df0c3d210" containerID="6aabe156e939f06c9a4c58f3d05676c449d7063a3ae53c294b8727b44199e19a" exitCode=0 Nov 21 15:01:10 crc kubenswrapper[4904]: I1121 15:01:10.326323 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7svb" event={"ID":"f48c1752-9d41-4564-bd95-577df0c3d210","Type":"ContainerDied","Data":"6aabe156e939f06c9a4c58f3d05676c449d7063a3ae53c294b8727b44199e19a"} Nov 21 15:01:11 crc kubenswrapper[4904]: I1121 15:01:11.340577 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7svb" event={"ID":"f48c1752-9d41-4564-bd95-577df0c3d210","Type":"ContainerStarted","Data":"d4a7733016d514f6f16a3b974c3441ca10f477764cc847cad61d8b499e5821a2"} Nov 21 15:01:11 crc kubenswrapper[4904]: I1121 15:01:11.378678 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m7svb" podStartSLOduration=4.664490559 podStartE2EDuration="13.378638156s" podCreationTimestamp="2025-11-21 15:00:58 +0000 UTC" firstStartedPulling="2025-11-21 15:01:02.127148606 +0000 UTC m=+5336.248681158" lastFinishedPulling="2025-11-21 15:01:10.841296193 +0000 UTC m=+5344.962828755" observedRunningTime="2025-11-21 15:01:11.371505783 +0000 UTC m=+5345.493038345" watchObservedRunningTime="2025-11-21 15:01:11.378638156 +0000 UTC m=+5345.500170708" Nov 21 15:01:16 crc kubenswrapper[4904]: I1121 15:01:16.412521 4904 generic.go:334] "Generic (PLEG): container finished" podID="21abe39c-baf1-4382-b041-ec03d59a99b5" containerID="ea84e43aeed9826334bac83e3ab307b54c70c718736ac8ec53510655f05e250d" exitCode=0 Nov 21 15:01:16 crc kubenswrapper[4904]: I1121 15:01:16.413234 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hk6xj" event={"ID":"21abe39c-baf1-4382-b041-ec03d59a99b5","Type":"ContainerDied","Data":"ea84e43aeed9826334bac83e3ab307b54c70c718736ac8ec53510655f05e250d"} Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.176443 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-hk6xj" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.252161 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psgf7\" (UniqueName: \"kubernetes.io/projected/21abe39c-baf1-4382-b041-ec03d59a99b5-kube-api-access-psgf7\") pod \"21abe39c-baf1-4382-b041-ec03d59a99b5\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.252271 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-combined-ca-bundle\") pod \"21abe39c-baf1-4382-b041-ec03d59a99b5\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.252630 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-job-config-data\") pod \"21abe39c-baf1-4382-b041-ec03d59a99b5\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.252733 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-config-data\") pod \"21abe39c-baf1-4382-b041-ec03d59a99b5\" (UID: \"21abe39c-baf1-4382-b041-ec03d59a99b5\") " Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.263930 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "21abe39c-baf1-4382-b041-ec03d59a99b5" (UID: "21abe39c-baf1-4382-b041-ec03d59a99b5"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.267974 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-config-data" (OuterVolumeSpecName: "config-data") pod "21abe39c-baf1-4382-b041-ec03d59a99b5" (UID: "21abe39c-baf1-4382-b041-ec03d59a99b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.273278 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21abe39c-baf1-4382-b041-ec03d59a99b5-kube-api-access-psgf7" (OuterVolumeSpecName: "kube-api-access-psgf7") pod "21abe39c-baf1-4382-b041-ec03d59a99b5" (UID: "21abe39c-baf1-4382-b041-ec03d59a99b5"). InnerVolumeSpecName "kube-api-access-psgf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.298349 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21abe39c-baf1-4382-b041-ec03d59a99b5" (UID: "21abe39c-baf1-4382-b041-ec03d59a99b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.356556 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.356612 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psgf7\" (UniqueName: \"kubernetes.io/projected/21abe39c-baf1-4382-b041-ec03d59a99b5-kube-api-access-psgf7\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.356625 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.356637 4904 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/21abe39c-baf1-4382-b041-ec03d59a99b5-job-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.453033 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-hk6xj" event={"ID":"21abe39c-baf1-4382-b041-ec03d59a99b5","Type":"ContainerDied","Data":"079d0cd950d4ea5d75ba0cc3e533ad82f48fc01e2b88ea9568a48d1b4ab296b3"} Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.453076 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079d0cd950d4ea5d75ba0cc3e533ad82f48fc01e2b88ea9568a48d1b4ab296b3" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.453139 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-hk6xj" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.861231 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:18 crc kubenswrapper[4904]: E1121 15:01:18.862070 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21abe39c-baf1-4382-b041-ec03d59a99b5" containerName="manila-db-sync" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.862091 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="21abe39c-baf1-4382-b041-ec03d59a99b5" containerName="manila-db-sync" Nov 21 15:01:18 crc kubenswrapper[4904]: E1121 15:01:18.862166 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6002e56-ad4f-43e5-928f-602c88ed887d" containerName="keystone-cron" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.862174 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6002e56-ad4f-43e5-928f-602c88ed887d" containerName="keystone-cron" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.862454 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="21abe39c-baf1-4382-b041-ec03d59a99b5" containerName="manila-db-sync" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.862480 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6002e56-ad4f-43e5-928f-602c88ed887d" containerName="keystone-cron" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.864205 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.871337 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.871589 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.871745 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.875099 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-497mv" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.893781 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.899579 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.911308 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.928776 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.963827 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979398 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979469 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979493 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979531 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-scripts\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979551 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 
15:01:18.979573 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979615 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979664 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-scripts\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979696 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtrk\" (UniqueName: \"kubernetes.io/projected/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-kube-api-access-cqtrk\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979725 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979798 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-ceph\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979831 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-266rm\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-kube-api-access-266rm\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979852 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:18 crc kubenswrapper[4904]: I1121 15:01:18.979874 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087230 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqtrk\" (UniqueName: \"kubernetes.io/projected/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-kube-api-access-cqtrk\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087300 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087420 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-ceph\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087460 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-266rm\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-kube-api-access-266rm\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087511 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087535 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087607 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087628 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087683 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-scripts\") pod \"manila-share-share1-0\" 
(UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087711 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087743 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087799 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.087855 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-scripts\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.088505 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.091733 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.093805 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.105468 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-scripts\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.106480 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.107957 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-ceph\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.120221 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.120614 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-scripts\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.123601 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.159829 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.160740 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.161340 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.171931 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-648f66fff6-bp2z7" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.75:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.75:8443: connect: connection refused" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.174204 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-266rm\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-kube-api-access-266rm\") pod \"manila-share-share1-0\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.185333 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqtrk\" (UniqueName: \"kubernetes.io/projected/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-kube-api-access-cqtrk\") pod \"manila-scheduler-0\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 
15:01:19.194528 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.251303 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.311748 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5767ddb7c-mbpns"] Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.314542 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.344149 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.346256 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.350134 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.350163 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.350200 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.368314 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-695bd477bb-gcrxw" podUID="7fc8baac-d51a-42f4-9444-c8e4172be134" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.76:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.76:8443: connect: connection refused" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.391193 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415517 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-ovsdbserver-nb\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415574 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-etc-machine-id\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415637 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-logs\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415681 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-dns-swift-storage-0\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " 
pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415701 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r5pm\" (UniqueName: \"kubernetes.io/projected/a141634c-9846-4d10-89f0-a5a28a50d016-kube-api-access-6r5pm\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415724 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data-custom\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415768 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415791 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-config\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415827 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-scripts\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415856 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415895 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-ovsdbserver-sb\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.415989 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-openstack-edpm-ipam\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.416016 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-dns-svc\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " 
pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.416048 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8ppz\" (UniqueName: \"kubernetes.io/projected/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-kube-api-access-v8ppz\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.424305 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5767ddb7c-mbpns"] Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.545030 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.545144 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-config\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.545311 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-scripts\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.545443 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.545565 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-ovsdbserver-sb\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.545879 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-openstack-edpm-ipam\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.545966 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-dns-svc\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.546082 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8ppz\" (UniqueName: \"kubernetes.io/projected/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-kube-api-access-v8ppz\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 
15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.546320 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-ovsdbserver-nb\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.546400 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-etc-machine-id\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.546477 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-logs\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.546556 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-dns-swift-storage-0\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.546598 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r5pm\" (UniqueName: \"kubernetes.io/projected/a141634c-9846-4d10-89f0-a5a28a50d016-kube-api-access-6r5pm\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.546646 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data-custom\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.638277 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.560013 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-etc-machine-id\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.564844 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-dns-svc\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.569015 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-config\") pod 
\"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.571720 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-ovsdbserver-sb\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.572448 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-ovsdbserver-nb\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.573270 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-dns-swift-storage-0\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.573639 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-logs\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.580070 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.592085 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8ppz\" (UniqueName: \"kubernetes.io/projected/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-kube-api-access-v8ppz\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.602627 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data-custom\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.634352 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r5pm\" (UniqueName: \"kubernetes.io/projected/a141634c-9846-4d10-89f0-a5a28a50d016-kube-api-access-6r5pm\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.553138 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a141634c-9846-4d10-89f0-a5a28a50d016-openstack-edpm-ipam\") pod \"dnsmasq-dns-5767ddb7c-mbpns\" (UID: \"a141634c-9846-4d10-89f0-a5a28a50d016\") " pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.642471 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.643193 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-scripts\") pod \"manila-api-0\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " pod="openstack/manila-api-0" Nov 21 15:01:19 crc kubenswrapper[4904]: I1121 15:01:19.653495 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 21 15:01:20 crc kubenswrapper[4904]: I1121 15:01:20.511849 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-m7svb" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="registry-server" probeResult="failure" output=< Nov 21 15:01:20 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:01:20 crc kubenswrapper[4904]: > Nov 21 15:01:20 crc kubenswrapper[4904]: I1121 15:01:20.596498 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5767ddb7c-mbpns"] Nov 21 15:01:20 crc kubenswrapper[4904]: I1121 15:01:20.596769 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:20 crc kubenswrapper[4904]: I1121 15:01:20.597385 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:20 crc kubenswrapper[4904]: I1121 15:01:20.676927 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" event={"ID":"a141634c-9846-4d10-89f0-a5a28a50d016","Type":"ContainerStarted","Data":"a1673646a5a249c4f314cb23dc1efe3dae9d4286a9c5cada48e7d8a95dac272f"} Nov 21 15:01:20 crc kubenswrapper[4904]: I1121 15:01:20.681019 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ae73f5d3-0420-4d34-9f4d-32aa3881f619","Type":"ContainerStarted","Data":"9d4aa1096b76d647d44ec0cbb2ec13773226a64bd2b3633598057790a3999e51"} Nov 21 15:01:20 crc kubenswrapper[4904]: I1121 15:01:20.691288 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"249ca4eb-592e-4a74-be4e-8eaa9bc7e882","Type":"ContainerStarted","Data":"3e43384b366bc3861c6fbc8814d3eaff3766ac3d8232290e165d95e8b4648a5b"} Nov 21 15:01:20 crc kubenswrapper[4904]: I1121 15:01:20.989121 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 21 15:01:21 crc kubenswrapper[4904]: I1121 15:01:21.707009 4904 generic.go:334] "Generic (PLEG): container finished" podID="a141634c-9846-4d10-89f0-a5a28a50d016" containerID="53cec2759dfc12b76830b368762ddc0c1845c04371766d2e01fb60258cf44088" exitCode=0 Nov 21 15:01:21 crc kubenswrapper[4904]: I1121 15:01:21.707561 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" event={"ID":"a141634c-9846-4d10-89f0-a5a28a50d016","Type":"ContainerDied","Data":"53cec2759dfc12b76830b368762ddc0c1845c04371766d2e01fb60258cf44088"} Nov 21 15:01:21 crc kubenswrapper[4904]: I1121 15:01:21.711376 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"dc7b595e-6329-4ba7-9caa-f51cc7e4666e","Type":"ContainerStarted","Data":"334c15f7885215c48a456d99e286dfc5a786cbbbba556c41dbac7e1ef5a2bfac"} Nov 21 15:01:22 crc kubenswrapper[4904]: I1121 15:01:22.772367 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"dc7b595e-6329-4ba7-9caa-f51cc7e4666e","Type":"ContainerStarted","Data":"0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c"} Nov 21 15:01:22 crc kubenswrapper[4904]: I1121 15:01:22.984886 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.787453 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"249ca4eb-592e-4a74-be4e-8eaa9bc7e882","Type":"ContainerStarted","Data":"a3698cd4135519d91e849ceecaa325c0a1cec82f02c4a1e69991d7feee19c7b7"} Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.813197 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" event={"ID":"a141634c-9846-4d10-89f0-a5a28a50d016","Type":"ContainerStarted","Data":"242c45ba340281658ba365d65f1800caf65b19b02450c47700f319770db0ad9c"} Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.813764 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.820370 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"dc7b595e-6329-4ba7-9caa-f51cc7e4666e","Type":"ContainerStarted","Data":"928c8804faa1037f37cefb6fa2221d93fbc03ee7826934f430fa8e38aa171182"} Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.820634 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" containerName="manila-api-log" containerID="cri-o://0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c" gracePeriod=30 Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.820996 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.821045 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" containerName="manila-api" containerID="cri-o://928c8804faa1037f37cefb6fa2221d93fbc03ee7826934f430fa8e38aa171182" gracePeriod=30 Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.845717 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" podStartSLOduration=4.845644104 podStartE2EDuration="4.845644104s" podCreationTimestamp="2025-11-21 15:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:01:23.838551012 +0000 UTC m=+5357.960083564" watchObservedRunningTime="2025-11-21 15:01:23.845644104 +0000 UTC m=+5357.967176656" Nov 21 15:01:23 crc kubenswrapper[4904]: I1121 15:01:23.884761 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.884721119 podStartE2EDuration="4.884721119s" podCreationTimestamp="2025-11-21 15:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:01:23.858223118 +0000 UTC m=+5357.979755680" watchObservedRunningTime="2025-11-21 15:01:23.884721119 +0000 UTC m=+5358.006253681" Nov 21 15:01:24 crc kubenswrapper[4904]: W1121 15:01:24.082750 4904 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda141634c_9846_4d10_89f0_a5a28a50d016.slice/crio-conmon-53cec2759dfc12b76830b368762ddc0c1845c04371766d2e01fb60258cf44088.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda141634c_9846_4d10_89f0_a5a28a50d016.slice/crio-conmon-53cec2759dfc12b76830b368762ddc0c1845c04371766d2e01fb60258cf44088.scope: no such file or directory Nov 21 15:01:24 crc kubenswrapper[4904]: W1121 15:01:24.082876 4904 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda141634c_9846_4d10_89f0_a5a28a50d016.slice/crio-53cec2759dfc12b76830b368762ddc0c1845c04371766d2e01fb60258cf44088.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda141634c_9846_4d10_89f0_a5a28a50d016.slice/crio-53cec2759dfc12b76830b368762ddc0c1845c04371766d2e01fb60258cf44088.scope: no such file or directory Nov 21 15:01:24 crc kubenswrapper[4904]: W1121 15:01:24.083117 4904 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7b595e_6329_4ba7_9caa_f51cc7e4666e.slice/crio-conmon-0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7b595e_6329_4ba7_9caa_f51cc7e4666e.slice/crio-conmon-0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c.scope: no such file or directory Nov 21 15:01:24 crc kubenswrapper[4904]: W1121 15:01:24.083158 4904 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7b595e_6329_4ba7_9caa_f51cc7e4666e.slice/crio-0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7b595e_6329_4ba7_9caa_f51cc7e4666e.slice/crio-0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c.scope: no such file or directory Nov 21 15:01:24 crc kubenswrapper[4904]: E1121 15:01:24.806229 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61534ca3_fad5_4d18_80cf_0331ec454e86.slice/crio-526b365148baa9681efc342f8d4c2d5626f37267b829536580af4e0abb372097.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb824537a_619a_48a7_bf66_4e9639582be0.slice/crio-ba19146ff6ef5776fa29e2b0bd8420cc110114affaedf76368924de7a15c1d77.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb824537a_619a_48a7_bf66_4e9639582be0.slice/crio-730e4f29e295d35e6c8a12976923eb16e6c96af69b3d24b78324d55b387894a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb824537a_619a_48a7_bf66_4e9639582be0.slice/crio-conmon-ba19146ff6ef5776fa29e2b0bd8420cc110114affaedf76368924de7a15c1d77.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb824537a_619a_48a7_bf66_4e9639582be0.slice/crio-conmon-730e4f29e295d35e6c8a12976923eb16e6c96af69b3d24b78324d55b387894a4.scope\": RecentStats: unable to find data in memory cache]" Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.843726 4904 generic.go:334] "Generic (PLEG): container finished" podID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerID="526b365148baa9681efc342f8d4c2d5626f37267b829536580af4e0abb372097" exitCode=137 Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.843759 4904 generic.go:334] "Generic (PLEG): container finished" podID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerID="1947a59e53af66a1e0265c9db38ddb6282596e889511062ac26cb2f5ee158d1f" exitCode=137 Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.843806 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5768cf7b89-vmm4n" event={"ID":"61534ca3-fad5-4d18-80cf-0331ec454e86","Type":"ContainerDied","Data":"526b365148baa9681efc342f8d4c2d5626f37267b829536580af4e0abb372097"} Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.843840 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5768cf7b89-vmm4n" event={"ID":"61534ca3-fad5-4d18-80cf-0331ec454e86","Type":"ContainerDied","Data":"1947a59e53af66a1e0265c9db38ddb6282596e889511062ac26cb2f5ee158d1f"} Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.851156 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"249ca4eb-592e-4a74-be4e-8eaa9bc7e882","Type":"ContainerStarted","Data":"1c91754c6f7efb5910f43a5f8b35ac5d3323a2660b0860cec15de27c81cc928c"} Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.876486 4904 generic.go:334] "Generic (PLEG): container finished" podID="b824537a-619a-48a7-bf66-4e9639582be0" containerID="730e4f29e295d35e6c8a12976923eb16e6c96af69b3d24b78324d55b387894a4" exitCode=137 Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.876521 4904 generic.go:334] "Generic (PLEG): container finished" podID="b824537a-619a-48a7-bf66-4e9639582be0" containerID="ba19146ff6ef5776fa29e2b0bd8420cc110114affaedf76368924de7a15c1d77" exitCode=137 Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.876601 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88ff5bcc-k5zxp" event={"ID":"b824537a-619a-48a7-bf66-4e9639582be0","Type":"ContainerDied","Data":"730e4f29e295d35e6c8a12976923eb16e6c96af69b3d24b78324d55b387894a4"} Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.876661 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88ff5bcc-k5zxp" event={"ID":"b824537a-619a-48a7-bf66-4e9639582be0","Type":"ContainerDied","Data":"ba19146ff6ef5776fa29e2b0bd8420cc110114affaedf76368924de7a15c1d77"} Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.890627 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=5.307231305 podStartE2EDuration="6.890603193s" podCreationTimestamp="2025-11-21 15:01:18 +0000 UTC" firstStartedPulling="2025-11-21 15:01:20.610276187 +0000 UTC m=+5354.731808739" lastFinishedPulling="2025-11-21 15:01:22.193648075 +0000 UTC m=+5356.315180627" observedRunningTime="2025-11-21 15:01:24.881460741 +0000 UTC m=+5359.002993293" watchObservedRunningTime="2025-11-21 15:01:24.890603193 +0000 UTC m=+5359.012135745" Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.891468 4904 generic.go:334] "Generic (PLEG): container finished" 
podID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" containerID="928c8804faa1037f37cefb6fa2221d93fbc03ee7826934f430fa8e38aa171182" exitCode=0 Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.891488 4904 generic.go:334] "Generic (PLEG): container finished" podID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" containerID="0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c" exitCode=143 Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.892545 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"dc7b595e-6329-4ba7-9caa-f51cc7e4666e","Type":"ContainerDied","Data":"928c8804faa1037f37cefb6fa2221d93fbc03ee7826934f430fa8e38aa171182"} Nov 21 15:01:24 crc kubenswrapper[4904]: I1121 15:01:24.892615 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"dc7b595e-6329-4ba7-9caa-f51cc7e4666e","Type":"ContainerDied","Data":"0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c"} Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.594498 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.606146 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c88ff5bcc-k5zxp" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.612592 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5768cf7b89-vmm4n" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.720728 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data-custom\") pod \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.720785 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-etc-machine-id\") pod \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.720849 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61534ca3-fad5-4d18-80cf-0331ec454e86-horizon-secret-key\") pod \"61534ca3-fad5-4d18-80cf-0331ec454e86\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.720876 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t6zz\" (UniqueName: \"kubernetes.io/projected/b824537a-619a-48a7-bf66-4e9639582be0-kube-api-access-7t6zz\") pod \"b824537a-619a-48a7-bf66-4e9639582be0\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.720896 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-scripts\") pod \"61534ca3-fad5-4d18-80cf-0331ec454e86\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.720947 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-scripts\") 
pod \"b824537a-619a-48a7-bf66-4e9639582be0\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721008 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-combined-ca-bundle\") pod \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721029 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-config-data\") pod \"61534ca3-fad5-4d18-80cf-0331ec454e86\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721094 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-scripts\") pod \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721167 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b824537a-619a-48a7-bf66-4e9639582be0-logs\") pod \"b824537a-619a-48a7-bf66-4e9639582be0\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721189 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b824537a-619a-48a7-bf66-4e9639582be0-horizon-secret-key\") pod \"b824537a-619a-48a7-bf66-4e9639582be0\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721229 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61534ca3-fad5-4d18-80cf-0331ec454e86-logs\") pod \"61534ca3-fad5-4d18-80cf-0331ec454e86\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721250 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-config-data\") pod \"b824537a-619a-48a7-bf66-4e9639582be0\" (UID: \"b824537a-619a-48a7-bf66-4e9639582be0\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721267 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data\") pod \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721291 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chsh8\" (UniqueName: \"kubernetes.io/projected/61534ca3-fad5-4d18-80cf-0331ec454e86-kube-api-access-chsh8\") pod \"61534ca3-fad5-4d18-80cf-0331ec454e86\" (UID: \"61534ca3-fad5-4d18-80cf-0331ec454e86\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721361 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8ppz\" (UniqueName: \"kubernetes.io/projected/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-kube-api-access-v8ppz\") pod \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\" (UID: 
\"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.721475 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-logs\") pod \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\" (UID: \"dc7b595e-6329-4ba7-9caa-f51cc7e4666e\") " Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.723354 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-logs" (OuterVolumeSpecName: "logs") pod "dc7b595e-6329-4ba7-9caa-f51cc7e4666e" (UID: "dc7b595e-6329-4ba7-9caa-f51cc7e4666e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.723520 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dc7b595e-6329-4ba7-9caa-f51cc7e4666e" (UID: "dc7b595e-6329-4ba7-9caa-f51cc7e4666e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.725025 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61534ca3-fad5-4d18-80cf-0331ec454e86-logs" (OuterVolumeSpecName: "logs") pod "61534ca3-fad5-4d18-80cf-0331ec454e86" (UID: "61534ca3-fad5-4d18-80cf-0331ec454e86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.725258 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b824537a-619a-48a7-bf66-4e9639582be0-logs" (OuterVolumeSpecName: "logs") pod "b824537a-619a-48a7-bf66-4e9639582be0" (UID: "b824537a-619a-48a7-bf66-4e9639582be0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.734447 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61534ca3-fad5-4d18-80cf-0331ec454e86-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "61534ca3-fad5-4d18-80cf-0331ec454e86" (UID: "61534ca3-fad5-4d18-80cf-0331ec454e86"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.743056 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b824537a-619a-48a7-bf66-4e9639582be0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b824537a-619a-48a7-bf66-4e9639582be0" (UID: "b824537a-619a-48a7-bf66-4e9639582be0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.748574 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b824537a-619a-48a7-bf66-4e9639582be0-kube-api-access-7t6zz" (OuterVolumeSpecName: "kube-api-access-7t6zz") pod "b824537a-619a-48a7-bf66-4e9639582be0" (UID: "b824537a-619a-48a7-bf66-4e9639582be0"). InnerVolumeSpecName "kube-api-access-7t6zz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.748972 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-scripts" (OuterVolumeSpecName: "scripts") pod "dc7b595e-6329-4ba7-9caa-f51cc7e4666e" (UID: "dc7b595e-6329-4ba7-9caa-f51cc7e4666e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.749297 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-kube-api-access-v8ppz" (OuterVolumeSpecName: "kube-api-access-v8ppz") pod "dc7b595e-6329-4ba7-9caa-f51cc7e4666e" (UID: "dc7b595e-6329-4ba7-9caa-f51cc7e4666e"). InnerVolumeSpecName "kube-api-access-v8ppz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.768017 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dc7b595e-6329-4ba7-9caa-f51cc7e4666e" (UID: "dc7b595e-6329-4ba7-9caa-f51cc7e4666e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.768881 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61534ca3-fad5-4d18-80cf-0331ec454e86-kube-api-access-chsh8" (OuterVolumeSpecName: "kube-api-access-chsh8") pod "61534ca3-fad5-4d18-80cf-0331ec454e86" (UID: "61534ca3-fad5-4d18-80cf-0331ec454e86"). InnerVolumeSpecName "kube-api-access-chsh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.798105 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-scripts" (OuterVolumeSpecName: "scripts") pod "b824537a-619a-48a7-bf66-4e9639582be0" (UID: "b824537a-619a-48a7-bf66-4e9639582be0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.816741 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-config-data" (OuterVolumeSpecName: "config-data") pod "b824537a-619a-48a7-bf66-4e9639582be0" (UID: "b824537a-619a-48a7-bf66-4e9639582be0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.833683 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/61534ca3-fad5-4d18-80cf-0331ec454e86-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.833742 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t6zz\" (UniqueName: \"kubernetes.io/projected/b824537a-619a-48a7-bf66-4e9639582be0-kube-api-access-7t6zz\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.833809 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.833824 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.833859 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b824537a-619a-48a7-bf66-4e9639582be0-logs\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.833870 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b824537a-619a-48a7-bf66-4e9639582be0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.834104 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61534ca3-fad5-4d18-80cf-0331ec454e86-logs\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.834291 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b824537a-619a-48a7-bf66-4e9639582be0-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.834307 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chsh8\" (UniqueName: \"kubernetes.io/projected/61534ca3-fad5-4d18-80cf-0331ec454e86-kube-api-access-chsh8\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.834318 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8ppz\" (UniqueName: \"kubernetes.io/projected/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-kube-api-access-v8ppz\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.834405 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-logs\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.834420 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.834435 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.836647 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc7b595e-6329-4ba7-9caa-f51cc7e4666e" (UID: "dc7b595e-6329-4ba7-9caa-f51cc7e4666e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.837997 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-scripts" (OuterVolumeSpecName: "scripts") pod "61534ca3-fad5-4d18-80cf-0331ec454e86" (UID: "61534ca3-fad5-4d18-80cf-0331ec454e86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.859401 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-config-data" (OuterVolumeSpecName: "config-data") pod "61534ca3-fad5-4d18-80cf-0331ec454e86" (UID: "61534ca3-fad5-4d18-80cf-0331ec454e86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.873819 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data" (OuterVolumeSpecName: "config-data") pod "dc7b595e-6329-4ba7-9caa-f51cc7e4666e" (UID: "dc7b595e-6329-4ba7-9caa-f51cc7e4666e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.916730 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"dc7b595e-6329-4ba7-9caa-f51cc7e4666e","Type":"ContainerDied","Data":"334c15f7885215c48a456d99e286dfc5a786cbbbba556c41dbac7e1ef5a2bfac"}
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.916822 4904 scope.go:117] "RemoveContainer" containerID="928c8804faa1037f37cefb6fa2221d93fbc03ee7826934f430fa8e38aa171182"
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.917089 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.929075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5768cf7b89-vmm4n" event={"ID":"61534ca3-fad5-4d18-80cf-0331ec454e86","Type":"ContainerDied","Data":"dbcd5113605f69e179b54d42ca12914fa48c7b8582e8ef153b698bbceda1d652"}
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.929099 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5768cf7b89-vmm4n"
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.938409 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-scripts\") on node \"crc\" DevicePath \"\""
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.938456 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.938473 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61534ca3-fad5-4d18-80cf-0331ec454e86-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.938512 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7b595e-6329-4ba7-9caa-f51cc7e4666e-config-data\") on node \"crc\" DevicePath \"\""
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.943481 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c88ff5bcc-k5zxp" event={"ID":"b824537a-619a-48a7-bf66-4e9639582be0","Type":"ContainerDied","Data":"2b004c0ed7aceae34372834c50268f6dc5e41524eea77b61cd7ed6b07b167ad4"}
Nov 21 15:01:25 crc kubenswrapper[4904]: I1121 15:01:25.943528 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c88ff5bcc-k5zxp"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.019095 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.022165 4904 scope.go:117] "RemoveContainer" containerID="0ec36a7948be90353cf694f2f8f452f13e4d8220238b164a25405d84ad41b26c"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.044311 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"]
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.067502 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
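Right above, manila-api-0 is deleted and immediately recreated: "SyncLoop DELETE" (API deletion observed), then "SyncLoop REMOVE" (the old object is gone), then "SyncLoop ADD" for the replacement pod. A small sketch, using only message forms visible in the log, to fold these into a per-pod event trail:

```python
import re

# Sketch: fold the kubelet's API-sourced lifecycle lines into a per-pod trail.
# The DELETE -> REMOVE -> ADD sequence above is the pod replacement cycle:
# deletion observed, old object gone, new object with the same name added.
SYNC = re.compile(r'"SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\["([^"]+)"\]')

def api_lifecycle(journal_lines):
    trail = {}
    for line in journal_lines:
        if (m := SYNC.search(line)):
            trail.setdefault(m.group(2), []).append(m.group(1))
    return trail

# Expected here: trail["openstack/manila-api-0"] begins ["DELETE", "REMOVE", "ADD", ...].
```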
podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerName="horizon" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.068828 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerName="horizon" Nov 21 15:01:26 crc kubenswrapper[4904]: E1121 15:01:26.068842 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" containerName="manila-api-log" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.068850 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" containerName="manila-api-log" Nov 21 15:01:26 crc kubenswrapper[4904]: E1121 15:01:26.068883 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerName="horizon-log" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.068889 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerName="horizon-log" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.069444 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b824537a-619a-48a7-bf66-4e9639582be0" containerName="horizon" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.069475 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerName="horizon" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.069488 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" containerName="horizon-log" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.069514 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" containerName="manila-api-log" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.069536 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b824537a-619a-48a7-bf66-4e9639582be0" containerName="horizon-log" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.069557 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" containerName="manila-api" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.081943 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.090591 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.091455 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.091648 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.096793 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"]
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.110803 4904 scope.go:117] "RemoveContainer" containerID="526b365148baa9681efc342f8d4c2d5626f37267b829536580af4e0abb372097"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.123092 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5768cf7b89-vmm4n"]
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.145187 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5768cf7b89-vmm4n"]
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.166184 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-c88ff5bcc-k5zxp"]
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172529 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a0b60d4-f457-4e29-bb0e-f244826249aa-logs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172689 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-config-data\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172742 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172773 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-scripts\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172843 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vpw7\" (UniqueName: \"kubernetes.io/projected/9a0b60d4-f457-4e29-bb0e-f244826249aa-kube-api-access-9vpw7\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172892 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-public-tls-certs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172938 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0b60d4-f457-4e29-bb0e-f244826249aa-etc-machine-id\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172965 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-config-data-custom\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.172997 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-internal-tls-certs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.179520 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-c88ff5bcc-k5zxp"]
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274345 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0b60d4-f457-4e29-bb0e-f244826249aa-etc-machine-id\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274402 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-config-data-custom\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274433 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-internal-tls-certs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274502 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a0b60d4-f457-4e29-bb0e-f244826249aa-logs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274557 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-config-data\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274589 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274615 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-scripts\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274658 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vpw7\" (UniqueName: \"kubernetes.io/projected/9a0b60d4-f457-4e29-bb0e-f244826249aa-kube-api-access-9vpw7\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.274727 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-public-tls-certs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.275035 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0b60d4-f457-4e29-bb0e-f244826249aa-etc-machine-id\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.275218 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a0b60d4-f457-4e29-bb0e-f244826249aa-logs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.282464 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-config-data-custom\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.282501 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-internal-tls-certs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.282929 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-public-tls-certs\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.283170 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-config-data\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.283570 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-scripts\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.286044 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0"
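The manila-api-0 replacement then walks the mount side of the same state machine: reconciler_common.go:245 starts VerifyControllerAttachedVolume for each desired volume, reconciler_common.go:218 starts MountVolume, and operation_generator.go:637 reports "MountVolume.SetUp succeeded". A sketch that tracks how far each volume has progressed; the phase strings come from the log, the rest is assumption:

```python
import re

# Sketch of the mount-side ordering visible above for manila-api-0:
# VerifyControllerAttachedVolume, then "MountVolume started", then
# "MountVolume.SetUp succeeded". Returns volumes that have not yet
# reached SetUp; empty once all nine volumes are mounted.
VOL = re.compile(r'for volume \\?"([^"\\]+)\\?"')
RANK = {
    "VerifyControllerAttachedVolume started": 0,
    "MountVolume started": 1,
    "MountVolume.SetUp succeeded": 2,
}

def pending_mounts(journal_lines, pod='pod="openstack/manila-api-0"'):
    progress = {}
    for line in journal_lines:
        if pod not in line:
            continue
        m = VOL.search(line)
        if not m:
            continue
        for phase, rank in RANK.items():
            if phase in line:
                progress[m.group(1)] = max(progress.get(m.group(1), -1), rank)
    return {vol: rank for vol, rank in progress.items() if rank < 2}
```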
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0b60d4-f457-4e29-bb0e-f244826249aa-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.293217 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vpw7\" (UniqueName: \"kubernetes.io/projected/9a0b60d4-f457-4e29-bb0e-f244826249aa-kube-api-access-9vpw7\") pod \"manila-api-0\" (UID: \"9a0b60d4-f457-4e29-bb0e-f244826249aa\") " pod="openstack/manila-api-0" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.366977 4904 scope.go:117] "RemoveContainer" containerID="1947a59e53af66a1e0265c9db38ddb6282596e889511062ac26cb2f5ee158d1f" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.414746 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.502096 4904 scope.go:117] "RemoveContainer" containerID="730e4f29e295d35e6c8a12976923eb16e6c96af69b3d24b78324d55b387894a4" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.537170 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61534ca3-fad5-4d18-80cf-0331ec454e86" path="/var/lib/kubelet/pods/61534ca3-fad5-4d18-80cf-0331ec454e86/volumes" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.538447 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b824537a-619a-48a7-bf66-4e9639582be0" path="/var/lib/kubelet/pods/b824537a-619a-48a7-bf66-4e9639582be0/volumes" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.539743 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7b595e-6329-4ba7-9caa-f51cc7e4666e" path="/var/lib/kubelet/pods/dc7b595e-6329-4ba7-9caa-f51cc7e4666e/volumes" Nov 21 15:01:26 crc kubenswrapper[4904]: I1121 15:01:26.773630 4904 scope.go:117] "RemoveContainer" containerID="ba19146ff6ef5776fa29e2b0bd8420cc110114affaedf76368924de7a15c1d77" Nov 21 15:01:27 crc kubenswrapper[4904]: I1121 15:01:27.209778 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Nov 21 15:01:27 crc kubenswrapper[4904]: I1121 15:01:27.823248 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 15:01:27 crc kubenswrapper[4904]: I1121 15:01:27.824220 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="proxy-httpd" containerID="cri-o://d9f1e4693f5d7fed3be2bb117adb9b5711be39b1f55297d212b3a2765e461af4" gracePeriod=30 Nov 21 15:01:27 crc kubenswrapper[4904]: I1121 15:01:27.824276 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="sg-core" containerID="cri-o://541a429f6b17d86b76d18a5133956d6ece21a0fd68eac103961356ba320e88d2" gracePeriod=30 Nov 21 15:01:27 crc kubenswrapper[4904]: I1121 15:01:27.824321 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-central-agent" containerID="cri-o://b996f01dcb6130895e201da08f4b964506c7b516a81d278f9ee1c145dff75de4" gracePeriod=30 Nov 21 15:01:27 crc kubenswrapper[4904]: I1121 15:01:27.824667 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-notification-agent" containerID="cri-o://44fd4c3d9b1d74a1133573bf946c11085e57c27226829507075ba495bc07c644" gracePeriod=30 Nov 21 15:01:27 crc kubenswrapper[4904]: I1121 15:01:27.997946 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9a0b60d4-f457-4e29-bb0e-f244826249aa","Type":"ContainerStarted","Data":"b627f135e1c7fc685e747203e4a132ba1a41ecb554c3c027ed6218c1f9c1da69"} Nov 21 15:01:28 crc kubenswrapper[4904]: I1121 15:01:28.007236 4904 generic.go:334] "Generic (PLEG): container finished" podID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerID="541a429f6b17d86b76d18a5133956d6ece21a0fd68eac103961356ba320e88d2" exitCode=2 Nov 21 15:01:28 crc kubenswrapper[4904]: I1121 15:01:28.007285 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerDied","Data":"541a429f6b17d86b76d18a5133956d6ece21a0fd68eac103961356ba320e88d2"} Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.023812 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9a0b60d4-f457-4e29-bb0e-f244826249aa","Type":"ContainerStarted","Data":"48531003de94e5bfe07f2f58a39f3e2a85ea9b5eb8c42801c516af0f65279867"} Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.024620 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.024632 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9a0b60d4-f457-4e29-bb0e-f244826249aa","Type":"ContainerStarted","Data":"96ada594cb86bcbdef9f81c3a183794c910a49c12e3ef663157569a23cb510de"} Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.028550 4904 generic.go:334] "Generic (PLEG): container finished" podID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerID="b996f01dcb6130895e201da08f4b964506c7b516a81d278f9ee1c145dff75de4" exitCode=0 Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.028598 4904 generic.go:334] "Generic (PLEG): container finished" podID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerID="d9f1e4693f5d7fed3be2bb117adb9b5711be39b1f55297d212b3a2765e461af4" exitCode=0 Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.028612 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerDied","Data":"b996f01dcb6130895e201da08f4b964506c7b516a81d278f9ee1c145dff75de4"} Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.028743 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerDied","Data":"d9f1e4693f5d7fed3be2bb117adb9b5711be39b1f55297d212b3a2765e461af4"} Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.054226 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.054203554 podStartE2EDuration="4.054203554s" podCreationTimestamp="2025-11-21 15:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:01:29.04661401 +0000 UTC m=+5363.168146582" watchObservedRunningTime="2025-11-21 15:01:29.054203554 +0000 UTC m=+5363.175736106" Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.195776 4904 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.420101 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.508766 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.644826 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5767ddb7c-mbpns" Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.720655 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-wh2vm"] Nov 21 15:01:29 crc kubenswrapper[4904]: I1121 15:01:29.720996 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" podUID="da697e71-847e-48aa-8210-3974862d9deb" containerName="dnsmasq-dns" containerID="cri-o://a6208f9d9b7ca7542f9fb0dedbc74bee9651492fe0f31e1faa9925789fffc999" gracePeriod=10 Nov 21 15:01:30 crc kubenswrapper[4904]: I1121 15:01:30.045931 4904 generic.go:334] "Generic (PLEG): container finished" podID="da697e71-847e-48aa-8210-3974862d9deb" containerID="a6208f9d9b7ca7542f9fb0dedbc74bee9651492fe0f31e1faa9925789fffc999" exitCode=0 Nov 21 15:01:30 crc kubenswrapper[4904]: I1121 15:01:30.047009 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" event={"ID":"da697e71-847e-48aa-8210-3974862d9deb","Type":"ContainerDied","Data":"a6208f9d9b7ca7542f9fb0dedbc74bee9651492fe0f31e1faa9925789fffc999"} Nov 21 15:01:30 crc kubenswrapper[4904]: I1121 15:01:30.192872 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m7svb"] Nov 21 15:01:31 crc kubenswrapper[4904]: I1121 15:01:31.067598 4904 generic.go:334] "Generic (PLEG): container finished" podID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerID="44fd4c3d9b1d74a1133573bf946c11085e57c27226829507075ba495bc07c644" exitCode=0 Nov 21 15:01:31 crc kubenswrapper[4904]: I1121 15:01:31.068291 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m7svb" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="registry-server" containerID="cri-o://d4a7733016d514f6f16a3b974c3441ca10f477764cc847cad61d8b499e5821a2" gracePeriod=2 Nov 21 15:01:31 crc kubenswrapper[4904]: I1121 15:01:31.068998 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerDied","Data":"44fd4c3d9b1d74a1133573bf946c11085e57c27226829507075ba495bc07c644"} Nov 21 15:01:32 crc kubenswrapper[4904]: I1121 15:01:32.090063 4904 generic.go:334] "Generic (PLEG): container finished" podID="f48c1752-9d41-4564-bd95-577df0c3d210" containerID="d4a7733016d514f6f16a3b974c3441ca10f477764cc847cad61d8b499e5821a2" exitCode=0 Nov 21 15:01:32 crc kubenswrapper[4904]: I1121 15:01:32.090156 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7svb" event={"ID":"f48c1752-9d41-4564-bd95-577df0c3d210","Type":"ContainerDied","Data":"d4a7733016d514f6f16a3b974c3441ca10f477764cc847cad61d8b499e5821a2"} Nov 21 15:01:32 crc kubenswrapper[4904]: I1121 15:01:32.279488 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/horizon-695bd477bb-gcrxw" Nov 21 15:01:32 crc kubenswrapper[4904]: I1121 15:01:32.312764 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-648f66fff6-bp2z7" Nov 21 15:01:32 crc kubenswrapper[4904]: I1121 15:01:32.499350 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" podUID="da697e71-847e-48aa-8210-3974862d9deb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.255:5353: connect: connection refused" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.134965 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.236953 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-catalog-content\") pod \"f48c1752-9d41-4564-bd95-577df0c3d210\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.237139 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-utilities\") pod \"f48c1752-9d41-4564-bd95-577df0c3d210\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.238277 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqzkq\" (UniqueName: \"kubernetes.io/projected/f48c1752-9d41-4564-bd95-577df0c3d210-kube-api-access-jqzkq\") pod \"f48c1752-9d41-4564-bd95-577df0c3d210\" (UID: \"f48c1752-9d41-4564-bd95-577df0c3d210\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.241184 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-utilities" (OuterVolumeSpecName: "utilities") pod "f48c1752-9d41-4564-bd95-577df0c3d210" (UID: "f48c1752-9d41-4564-bd95-577df0c3d210"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.268388 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f48c1752-9d41-4564-bd95-577df0c3d210-kube-api-access-jqzkq" (OuterVolumeSpecName: "kube-api-access-jqzkq") pod "f48c1752-9d41-4564-bd95-577df0c3d210" (UID: "f48c1752-9d41-4564-bd95-577df0c3d210"). InnerVolumeSpecName "kube-api-access-jqzkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.336973 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f48c1752-9d41-4564-bd95-577df0c3d210" (UID: "f48c1752-9d41-4564-bd95-577df0c3d210"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.345442 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqzkq\" (UniqueName: \"kubernetes.io/projected/f48c1752-9d41-4564-bd95-577df0c3d210-kube-api-access-jqzkq\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.345484 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.345494 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f48c1752-9d41-4564-bd95-577df0c3d210-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.426507 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.550127 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-combined-ca-bundle\") pod \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.550245 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-log-httpd\") pod \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.550323 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-scripts\") pod \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.550364 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-config-data\") pod \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.550419 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-sg-core-conf-yaml\") pod \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.550464 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-run-httpd\") pod \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.550691 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxcm9\" (UniqueName: \"kubernetes.io/projected/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-kube-api-access-hxcm9\") pod \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 
15:01:33.550744 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-ceilometer-tls-certs\") pod \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\" (UID: \"5f974c52-ff88-48b3-b3c4-fb2bca5201fd\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.553604 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5f974c52-ff88-48b3-b3c4-fb2bca5201fd" (UID: "5f974c52-ff88-48b3-b3c4-fb2bca5201fd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.554132 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5f974c52-ff88-48b3-b3c4-fb2bca5201fd" (UID: "5f974c52-ff88-48b3-b3c4-fb2bca5201fd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.561850 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-scripts" (OuterVolumeSpecName: "scripts") pod "5f974c52-ff88-48b3-b3c4-fb2bca5201fd" (UID: "5f974c52-ff88-48b3-b3c4-fb2bca5201fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.570152 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-kube-api-access-hxcm9" (OuterVolumeSpecName: "kube-api-access-hxcm9") pod "5f974c52-ff88-48b3-b3c4-fb2bca5201fd" (UID: "5f974c52-ff88-48b3-b3c4-fb2bca5201fd"). InnerVolumeSpecName "kube-api-access-hxcm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.608446 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.615537 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5f974c52-ff88-48b3-b3c4-fb2bca5201fd" (UID: "5f974c52-ff88-48b3-b3c4-fb2bca5201fd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.658462 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.667370 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.667414 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.667433 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.667453 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxcm9\" (UniqueName: \"kubernetes.io/projected/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-kube-api-access-hxcm9\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.693823 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5f974c52-ff88-48b3-b3c4-fb2bca5201fd" (UID: "5f974c52-ff88-48b3-b3c4-fb2bca5201fd"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.768956 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-nb\") pod \"da697e71-847e-48aa-8210-3974862d9deb\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.769044 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-svc\") pod \"da697e71-847e-48aa-8210-3974862d9deb\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.769084 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-sb\") pod \"da697e71-847e-48aa-8210-3974862d9deb\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.769225 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-openstack-edpm-ipam\") pod \"da697e71-847e-48aa-8210-3974862d9deb\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.769371 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2ntm\" (UniqueName: \"kubernetes.io/projected/da697e71-847e-48aa-8210-3974862d9deb-kube-api-access-k2ntm\") pod 
\"da697e71-847e-48aa-8210-3974862d9deb\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.769496 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-swift-storage-0\") pod \"da697e71-847e-48aa-8210-3974862d9deb\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.769523 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-config\") pod \"da697e71-847e-48aa-8210-3974862d9deb\" (UID: \"da697e71-847e-48aa-8210-3974862d9deb\") " Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.770052 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.775502 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f974c52-ff88-48b3-b3c4-fb2bca5201fd" (UID: "5f974c52-ff88-48b3-b3c4-fb2bca5201fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.782222 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da697e71-847e-48aa-8210-3974862d9deb-kube-api-access-k2ntm" (OuterVolumeSpecName: "kube-api-access-k2ntm") pod "da697e71-847e-48aa-8210-3974862d9deb" (UID: "da697e71-847e-48aa-8210-3974862d9deb"). InnerVolumeSpecName "kube-api-access-k2ntm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.848614 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-config" (OuterVolumeSpecName: "config") pod "da697e71-847e-48aa-8210-3974862d9deb" (UID: "da697e71-847e-48aa-8210-3974862d9deb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.860910 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da697e71-847e-48aa-8210-3974862d9deb" (UID: "da697e71-847e-48aa-8210-3974862d9deb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.876191 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2ntm\" (UniqueName: \"kubernetes.io/projected/da697e71-847e-48aa-8210-3974862d9deb-kube-api-access-k2ntm\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.876288 4904 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-config\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.876303 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.876318 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.886078 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "da697e71-847e-48aa-8210-3974862d9deb" (UID: "da697e71-847e-48aa-8210-3974862d9deb"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.887802 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da697e71-847e-48aa-8210-3974862d9deb" (UID: "da697e71-847e-48aa-8210-3974862d9deb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.896131 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da697e71-847e-48aa-8210-3974862d9deb" (UID: "da697e71-847e-48aa-8210-3974862d9deb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.914789 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da697e71-847e-48aa-8210-3974862d9deb" (UID: "da697e71-847e-48aa-8210-3974862d9deb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.934113 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-config-data" (OuterVolumeSpecName: "config-data") pod "5f974c52-ff88-48b3-b3c4-fb2bca5201fd" (UID: "5f974c52-ff88-48b3-b3c4-fb2bca5201fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.978873 4904 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.978925 4904 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.978936 4904 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.978951 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f974c52-ff88-48b3-b3c4-fb2bca5201fd-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:33 crc kubenswrapper[4904]: I1121 15:01:33.978961 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/da697e71-847e-48aa-8210-3974862d9deb-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.133695 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m7svb" event={"ID":"f48c1752-9d41-4564-bd95-577df0c3d210","Type":"ContainerDied","Data":"d832504ad61dda23ee7ded73a6a0df599a08698793accadf63414a909a98318a"} Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.133750 4904 scope.go:117] "RemoveContainer" containerID="d4a7733016d514f6f16a3b974c3441ca10f477764cc847cad61d8b499e5821a2" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.133766 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m7svb" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.141172 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ae73f5d3-0420-4d34-9f4d-32aa3881f619","Type":"ContainerStarted","Data":"ff8f1bc762e20cc08021475c9c44fc99a08d5d8926ec654fb7dc4bf6b395dfd7"} Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.146553 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" event={"ID":"da697e71-847e-48aa-8210-3974862d9deb","Type":"ContainerDied","Data":"e68d040f8255c3cff70e88e4acd5413ee5e12fb9f30aa42954bfbf9b898aa1bf"} Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.146648 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6559847fc9-wh2vm" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.163862 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5f974c52-ff88-48b3-b3c4-fb2bca5201fd","Type":"ContainerDied","Data":"551c3892e8c876c67e9b234f9aadd79d90fa24fc0d1dc6fa1820f3cc5b9b7d43"} Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.164001 4904 util.go:48] "No ready sandbox for pod can be found. 
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.189885 4904 scope.go:117] "RemoveContainer" containerID="6aabe156e939f06c9a4c58f3d05676c449d7063a3ae53c294b8727b44199e19a"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.208932 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m7svb"]
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.232723 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m7svb"]
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.272625 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-wh2vm"]
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.273648 4904 scope.go:117] "RemoveContainer" containerID="b481498cf884f0e7621125971cc9505c4d79db8acf6a9543bd7f42556b17e5cf"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.295389 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6559847fc9-wh2vm"]
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.312546 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.319274 4904 scope.go:117] "RemoveContainer" containerID="a6208f9d9b7ca7542f9fb0dedbc74bee9651492fe0f31e1faa9925789fffc999"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.325751 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.338720 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339308 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="registry-server"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339332 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="registry-server"
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339344 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-central-agent"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339352 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-central-agent"
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339362 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da697e71-847e-48aa-8210-3974862d9deb" containerName="init"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339370 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="da697e71-847e-48aa-8210-3974862d9deb" containerName="init"
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339385 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-notification-agent"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339391 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-notification-agent"
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339405 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da697e71-847e-48aa-8210-3974862d9deb" containerName="dnsmasq-dns"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339415 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="da697e71-847e-48aa-8210-3974862d9deb" containerName="dnsmasq-dns"
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339433 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="extract-content"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339439 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="extract-content"
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339453 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="extract-utilities"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339459 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="extract-utilities"
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339486 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="sg-core"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339494 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="sg-core"
Nov 21 15:01:34 crc kubenswrapper[4904]: E1121 15:01:34.339511 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="proxy-httpd"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339517 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="proxy-httpd"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339776 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="sg-core"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339800 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-central-agent"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339818 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="proxy-httpd"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339829 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" containerName="ceilometer-notification-agent"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339844 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" containerName="registry-server"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.339856 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="da697e71-847e-48aa-8210-3974862d9deb" containerName="dnsmasq-dns"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.346160 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.351514 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.353016 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.353969 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.363327 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.394840 4904 scope.go:117] "RemoveContainer" containerID="cb764a2bac67a6184a682008df54717eeb6c2ab12bc669c554889f8bdfe38f92"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.502939 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.503113 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8hnr\" (UniqueName: \"kubernetes.io/projected/a117ac6e-4da2-489b-ae58-1eabb354721f-kube-api-access-w8hnr\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.503163 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-scripts\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.503197 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.503334 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-log-httpd\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.503383 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0"
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.503478 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-config-data\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0"
Nov 21 15:01:34 crc kubenswrapper[4904]:
I1121 15:01:34.503520 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-run-httpd\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.528448 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f974c52-ff88-48b3-b3c4-fb2bca5201fd" path="/var/lib/kubelet/pods/5f974c52-ff88-48b3-b3c4-fb2bca5201fd/volumes" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.530934 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da697e71-847e-48aa-8210-3974862d9deb" path="/var/lib/kubelet/pods/da697e71-847e-48aa-8210-3974862d9deb/volumes" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.531755 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f48c1752-9d41-4564-bd95-577df0c3d210" path="/var/lib/kubelet/pods/f48c1752-9d41-4564-bd95-577df0c3d210/volumes" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.606221 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-log-httpd\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.606295 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.606376 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-config-data\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.606401 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-run-httpd\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.606603 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.606669 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8hnr\" (UniqueName: \"kubernetes.io/projected/a117ac6e-4da2-489b-ae58-1eabb354721f-kube-api-access-w8hnr\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.606709 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-scripts\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" 
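The entries around this point trace a complete pod replacement for openstack/ceilometer-0: the old pod's volumes are detached (reconciler_common.go:293), cpu_manager and memory_manager drop their stale per-container state (cpu_manager.go:410, memory_manager.go:354), and the reconciler then runs operationExecutor.VerifyControllerAttachedVolume followed by MountVolume.SetUp for the new pod UID a117ac6e-4da2-489b-ae58-1eabb354721f. Because several pods' lifecycles interleave in this stream, it can help to group entries per pod. The following is a minimal editor's sketch, not part of the captured log: it assumes one journal entry per line in the exact shape shown here (journalctl prefix plus klog header), and the script name, fallback key, and truncation width are arbitrary illustrative choices.

```python
#!/usr/bin/env python3
# group_by_pod.py -- minimal sketch: bucket kubenswrapper (kubelet) journal
# entries by their pod="..." field so interleaved lifecycle output (SyncLoop
# DELETE/ADD, UnmountVolume, MountVolume, PLEG events) reads per pod.
# Assumption: one journal entry per line, as journalctl normally emits it.
import re
import sys
from collections import defaultdict

# journalctl prefix ("Nov 21 15:01:34 crc kubenswrapper[4904]:") followed by a
# klog header ("I1121 15:01:34.503113 4904 reconciler_common.go:245] ...").
KLOG = re.compile(
    r'^(?P<stamp>\w{3} +\d+ [\d:]+) \S+ kubenswrapper\[\d+\]: '
    r'(?P<level>[IWE])\d{4} (?P<time>[\d:.]+) +\d+ (?P<src>\S+)\] (?P<body>.*)$'
)
POD = re.compile(r'pod="(?P<pod>[^"]+)"')

events = defaultdict(list)
for raw in sys.stdin:
    m = KLOG.match(raw.strip())
    if not m:
        continue  # systemd/restorecon noise or non-klog lines
    p = POD.search(m['body'])
    key = p['pod'] if p else '(no pod field)'  # e.g. cadvisor watch warnings
    events[key].append((m['time'], m['src'], m['body'][:100]))

for pod, entries in sorted(events.items()):
    print(f'== {pod}: {len(entries)} entries')
    for t, src, body in entries:
        print(f'  {t} {src} {body}')
```

Run it as, for example, `journalctl -u kubelet --no-pager | python3 group_by_pod.py`; the unit name may differ per deployment (on this CRC node the messages are emitted via the kubenswrapper wrapper), and `journalctl -u` / `--no-pager` are standard journalctl options.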
Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.606728 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.607730 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-run-httpd\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.608212 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-log-httpd\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.639051 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-648f66fff6-bp2z7" Nov 21 15:01:34 crc kubenswrapper[4904]: I1121 15:01:34.924115 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-695bd477bb-gcrxw" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.002445 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-648f66fff6-bp2z7"] Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.051963 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.058402 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.058592 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-scripts\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.068352 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-config-data\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.068233 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.073852 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8hnr\" (UniqueName: \"kubernetes.io/projected/a117ac6e-4da2-489b-ae58-1eabb354721f-kube-api-access-w8hnr\") 
pod \"ceilometer-0\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " pod="openstack/ceilometer-0" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.114252 4904 scope.go:117] "RemoveContainer" containerID="b996f01dcb6130895e201da08f4b964506c7b516a81d278f9ee1c145dff75de4" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.194192 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.195553 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.201598 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ae73f5d3-0420-4d34-9f4d-32aa3881f619","Type":"ContainerStarted","Data":"a457c1eb9e1778721028284b8b304159735130174eb4ca78b210f6e7a39a3e8e"} Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.207280 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-648f66fff6-bp2z7" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon-log" containerID="cri-o://fc01bcb4218b7b9864118de827a26def8b52df4c425fa708adcd7dd3befa74dd" gracePeriod=30 Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.207557 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-648f66fff6-bp2z7" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" containerID="cri-o://11ceaf74ebc5a58e4025282426be09a7c7c72ef4a5297ca34829896518d140fa" gracePeriod=30 Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.245250 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=5.21494745 podStartE2EDuration="17.245223949s" podCreationTimestamp="2025-11-21 15:01:18 +0000 UTC" firstStartedPulling="2025-11-21 15:01:20.632795211 +0000 UTC m=+5354.754327763" lastFinishedPulling="2025-11-21 15:01:32.66307171 +0000 UTC m=+5366.784604262" observedRunningTime="2025-11-21 15:01:35.233412894 +0000 UTC m=+5369.354945446" watchObservedRunningTime="2025-11-21 15:01:35.245223949 +0000 UTC m=+5369.366756521" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.279553 4904 scope.go:117] "RemoveContainer" containerID="d9f1e4693f5d7fed3be2bb117adb9b5711be39b1f55297d212b3a2765e461af4" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.415486 4904 scope.go:117] "RemoveContainer" containerID="541a429f6b17d86b76d18a5133956d6ece21a0fd68eac103961356ba320e88d2" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.473116 4904 scope.go:117] "RemoveContainer" containerID="44fd4c3d9b1d74a1133573bf946c11085e57c27226829507075ba495bc07c644" Nov 21 15:01:35 crc kubenswrapper[4904]: I1121 15:01:35.807453 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 15:01:35 crc kubenswrapper[4904]: W1121 15:01:35.816335 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda117ac6e_4da2_489b_ae58_1eabb354721f.slice/crio-ed72ccc79b8fd58f56e56bfc0948ed24b14ef55962756bf6d44b229d164a16f2 WatchSource:0}: Error finding container ed72ccc79b8fd58f56e56bfc0948ed24b14ef55962756bf6d44b229d164a16f2: Status 404 returned error can't find the container with id ed72ccc79b8fd58f56e56bfc0948ed24b14ef55962756bf6d44b229d164a16f2 Nov 21 15:01:36 crc kubenswrapper[4904]: I1121 15:01:36.221423 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerStarted","Data":"ed72ccc79b8fd58f56e56bfc0948ed24b14ef55962756bf6d44b229d164a16f2"} Nov 21 15:01:37 crc kubenswrapper[4904]: I1121 15:01:37.238309 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerStarted","Data":"4e8d832bc36bfd0249073c8eeba89d15d140cd4a14570e1956a5933e153a6ef2"} Nov 21 15:01:38 crc kubenswrapper[4904]: I1121 15:01:38.251923 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerStarted","Data":"f4ef32037e6dd864e3c30fb9a9f05c0c8692a5ad3a4c757fad92a764d419a784"} Nov 21 15:01:39 crc kubenswrapper[4904]: I1121 15:01:39.147602 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-648f66fff6-bp2z7" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.75:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.75:8443: connect: connection refused" Nov 21 15:01:39 crc kubenswrapper[4904]: I1121 15:01:39.253031 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 21 15:01:39 crc kubenswrapper[4904]: I1121 15:01:39.269399 4904 generic.go:334] "Generic (PLEG): container finished" podID="95cd8e52-9048-40c0-8f62-b36f39433908" containerID="11ceaf74ebc5a58e4025282426be09a7c7c72ef4a5297ca34829896518d140fa" exitCode=0 Nov 21 15:01:39 crc kubenswrapper[4904]: I1121 15:01:39.269481 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-648f66fff6-bp2z7" event={"ID":"95cd8e52-9048-40c0-8f62-b36f39433908","Type":"ContainerDied","Data":"11ceaf74ebc5a58e4025282426be09a7c7c72ef4a5297ca34829896518d140fa"} Nov 21 15:01:39 crc kubenswrapper[4904]: I1121 15:01:39.273209 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerStarted","Data":"bdfd0fbfee69728eb81dbc2022f6ccd9d6ec7dd8f4e50ae1a3f64010ce38e46f"} Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.150915 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.226443 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.312026 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerStarted","Data":"8adf41e439a686d3665e7ff1236c13f6f981fda0009b0f59c650ebbb53766d06"} Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.312339 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerName="probe" containerID="cri-o://1c91754c6f7efb5910f43a5f8b35ac5d3323a2660b0860cec15de27c81cc928c" gracePeriod=30 Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.312513 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="ceilometer-notification-agent" containerID="cri-o://f4ef32037e6dd864e3c30fb9a9f05c0c8692a5ad3a4c757fad92a764d419a784" 
gracePeriod=30 Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.312412 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="ceilometer-central-agent" containerID="cri-o://4e8d832bc36bfd0249073c8eeba89d15d140cd4a14570e1956a5933e153a6ef2" gracePeriod=30 Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.312479 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="proxy-httpd" containerID="cri-o://8adf41e439a686d3665e7ff1236c13f6f981fda0009b0f59c650ebbb53766d06" gracePeriod=30 Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.312466 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="sg-core" containerID="cri-o://bdfd0fbfee69728eb81dbc2022f6ccd9d6ec7dd8f4e50ae1a3f64010ce38e46f" gracePeriod=30 Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.312271 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerName="manila-scheduler" containerID="cri-o://a3698cd4135519d91e849ceecaa325c0a1cec82f02c4a1e69991d7feee19c7b7" gracePeriod=30 Nov 21 15:01:41 crc kubenswrapper[4904]: I1121 15:01:41.340442 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9340576499999997 podStartE2EDuration="7.340411046s" podCreationTimestamp="2025-11-21 15:01:34 +0000 UTC" firstStartedPulling="2025-11-21 15:01:35.82045575 +0000 UTC m=+5369.941988292" lastFinishedPulling="2025-11-21 15:01:40.226809136 +0000 UTC m=+5374.348341688" observedRunningTime="2025-11-21 15:01:41.339537455 +0000 UTC m=+5375.461069997" watchObservedRunningTime="2025-11-21 15:01:41.340411046 +0000 UTC m=+5375.461943598" Nov 21 15:01:42 crc kubenswrapper[4904]: I1121 15:01:42.329910 4904 generic.go:334] "Generic (PLEG): container finished" podID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerID="1c91754c6f7efb5910f43a5f8b35ac5d3323a2660b0860cec15de27c81cc928c" exitCode=0 Nov 21 15:01:42 crc kubenswrapper[4904]: I1121 15:01:42.330068 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"249ca4eb-592e-4a74-be4e-8eaa9bc7e882","Type":"ContainerDied","Data":"1c91754c6f7efb5910f43a5f8b35ac5d3323a2660b0860cec15de27c81cc928c"} Nov 21 15:01:42 crc kubenswrapper[4904]: I1121 15:01:42.334292 4904 generic.go:334] "Generic (PLEG): container finished" podID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerID="bdfd0fbfee69728eb81dbc2022f6ccd9d6ec7dd8f4e50ae1a3f64010ce38e46f" exitCode=2 Nov 21 15:01:42 crc kubenswrapper[4904]: I1121 15:01:42.334329 4904 generic.go:334] "Generic (PLEG): container finished" podID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerID="f4ef32037e6dd864e3c30fb9a9f05c0c8692a5ad3a4c757fad92a764d419a784" exitCode=0 Nov 21 15:01:42 crc kubenswrapper[4904]: I1121 15:01:42.334342 4904 generic.go:334] "Generic (PLEG): container finished" podID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerID="4e8d832bc36bfd0249073c8eeba89d15d140cd4a14570e1956a5933e153a6ef2" exitCode=0 Nov 21 15:01:42 crc kubenswrapper[4904]: I1121 15:01:42.334376 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerDied","Data":"bdfd0fbfee69728eb81dbc2022f6ccd9d6ec7dd8f4e50ae1a3f64010ce38e46f"} Nov 21 15:01:42 crc kubenswrapper[4904]: I1121 15:01:42.334417 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerDied","Data":"f4ef32037e6dd864e3c30fb9a9f05c0c8692a5ad3a4c757fad92a764d419a784"} Nov 21 15:01:42 crc kubenswrapper[4904]: I1121 15:01:42.334436 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerDied","Data":"4e8d832bc36bfd0249073c8eeba89d15d140cd4a14570e1956a5933e153a6ef2"} Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.385834 4904 generic.go:334] "Generic (PLEG): container finished" podID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerID="a3698cd4135519d91e849ceecaa325c0a1cec82f02c4a1e69991d7feee19c7b7" exitCode=0 Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.386989 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"249ca4eb-592e-4a74-be4e-8eaa9bc7e882","Type":"ContainerDied","Data":"a3698cd4135519d91e849ceecaa325c0a1cec82f02c4a1e69991d7feee19c7b7"} Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.871792 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.982981 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-combined-ca-bundle\") pod \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.983530 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data\") pod \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.983787 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-scripts\") pod \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.983934 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data-custom\") pod \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.984057 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-etc-machine-id\") pod \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\" (UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.984176 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqtrk\" (UniqueName: \"kubernetes.io/projected/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-kube-api-access-cqtrk\") pod \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\" 
(UID: \"249ca4eb-592e-4a74-be4e-8eaa9bc7e882\") " Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.984169 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "249ca4eb-592e-4a74-be4e-8eaa9bc7e882" (UID: "249ca4eb-592e-4a74-be4e-8eaa9bc7e882"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.985610 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:45 crc kubenswrapper[4904]: I1121 15:01:45.992561 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "249ca4eb-592e-4a74-be4e-8eaa9bc7e882" (UID: "249ca4eb-592e-4a74-be4e-8eaa9bc7e882"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.007882 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-scripts" (OuterVolumeSpecName: "scripts") pod "249ca4eb-592e-4a74-be4e-8eaa9bc7e882" (UID: "249ca4eb-592e-4a74-be4e-8eaa9bc7e882"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.016695 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-kube-api-access-cqtrk" (OuterVolumeSpecName: "kube-api-access-cqtrk") pod "249ca4eb-592e-4a74-be4e-8eaa9bc7e882" (UID: "249ca4eb-592e-4a74-be4e-8eaa9bc7e882"). InnerVolumeSpecName "kube-api-access-cqtrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.067970 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "249ca4eb-592e-4a74-be4e-8eaa9bc7e882" (UID: "249ca4eb-592e-4a74-be4e-8eaa9bc7e882"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.091902 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.091940 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.091950 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.091961 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqtrk\" (UniqueName: \"kubernetes.io/projected/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-kube-api-access-cqtrk\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.131671 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data" (OuterVolumeSpecName: "config-data") pod "249ca4eb-592e-4a74-be4e-8eaa9bc7e882" (UID: "249ca4eb-592e-4a74-be4e-8eaa9bc7e882"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.194535 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/249ca4eb-592e-4a74-be4e-8eaa9bc7e882-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.400809 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"249ca4eb-592e-4a74-be4e-8eaa9bc7e882","Type":"ContainerDied","Data":"3e43384b366bc3861c6fbc8814d3eaff3766ac3d8232290e165d95e8b4648a5b"} Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.400869 4904 scope.go:117] "RemoveContainer" containerID="1c91754c6f7efb5910f43a5f8b35ac5d3323a2660b0860cec15de27c81cc928c" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.401016 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.453143 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.470353 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.488997 4904 scope.go:117] "RemoveContainer" containerID="a3698cd4135519d91e849ceecaa325c0a1cec82f02c4a1e69991d7feee19c7b7" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.489541 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:46 crc kubenswrapper[4904]: E1121 15:01:46.490283 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerName="manila-scheduler" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.490309 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerName="manila-scheduler" Nov 21 15:01:46 crc kubenswrapper[4904]: E1121 15:01:46.490369 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerName="probe" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.490377 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerName="probe" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.490644 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerName="probe" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.490685 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" containerName="manila-scheduler" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.492438 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.498622 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.558211 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="249ca4eb-592e-4a74-be4e-8eaa9bc7e882" path="/var/lib/kubelet/pods/249ca4eb-592e-4a74-be4e-8eaa9bc7e882/volumes" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.559739 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.604366 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.604462 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-scripts\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.604533 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ghn\" (UniqueName: \"kubernetes.io/projected/8bea7feb-21d5-4b01-98b3-5b16737f0274-kube-api-access-w2ghn\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.604555 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8bea7feb-21d5-4b01-98b3-5b16737f0274-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.604679 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-config-data\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.604812 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.707384 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.707479 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-scripts\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.707561 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ghn\" (UniqueName: \"kubernetes.io/projected/8bea7feb-21d5-4b01-98b3-5b16737f0274-kube-api-access-w2ghn\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.707582 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8bea7feb-21d5-4b01-98b3-5b16737f0274-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.707676 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-config-data\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.707789 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.708645 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8bea7feb-21d5-4b01-98b3-5b16737f0274-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.713888 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.713888 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.714229 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-config-data\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.723530 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bea7feb-21d5-4b01-98b3-5b16737f0274-scripts\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.738388 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w2ghn\" (UniqueName: \"kubernetes.io/projected/8bea7feb-21d5-4b01-98b3-5b16737f0274-kube-api-access-w2ghn\") pod \"manila-scheduler-0\" (UID: \"8bea7feb-21d5-4b01-98b3-5b16737f0274\") " pod="openstack/manila-scheduler-0" Nov 21 15:01:46 crc kubenswrapper[4904]: I1121 15:01:46.834790 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Nov 21 15:01:47 crc kubenswrapper[4904]: I1121 15:01:47.371754 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Nov 21 15:01:47 crc kubenswrapper[4904]: W1121 15:01:47.382340 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bea7feb_21d5_4b01_98b3_5b16737f0274.slice/crio-f575b10a69a8c4fd0375ecd62ae0ed18167ce5b6d6d7c05821e3238101718151 WatchSource:0}: Error finding container f575b10a69a8c4fd0375ecd62ae0ed18167ce5b6d6d7c05821e3238101718151: Status 404 returned error can't find the container with id f575b10a69a8c4fd0375ecd62ae0ed18167ce5b6d6d7c05821e3238101718151 Nov 21 15:01:47 crc kubenswrapper[4904]: I1121 15:01:47.418690 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8bea7feb-21d5-4b01-98b3-5b16737f0274","Type":"ContainerStarted","Data":"f575b10a69a8c4fd0375ecd62ae0ed18167ce5b6d6d7c05821e3238101718151"} Nov 21 15:01:48 crc kubenswrapper[4904]: I1121 15:01:48.181842 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Nov 21 15:01:48 crc kubenswrapper[4904]: I1121 15:01:48.439844 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8bea7feb-21d5-4b01-98b3-5b16737f0274","Type":"ContainerStarted","Data":"0df67d2355d5c72fb41b04234d7bf113383442d8a1dc5ab372c692e8b4fc1164"} Nov 21 15:01:49 crc kubenswrapper[4904]: I1121 15:01:49.147525 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-648f66fff6-bp2z7" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.75:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.75:8443: connect: connection refused" Nov 21 15:01:49 crc kubenswrapper[4904]: I1121 15:01:49.458317 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"8bea7feb-21d5-4b01-98b3-5b16737f0274","Type":"ContainerStarted","Data":"9154d91ee6fe765a73a6669cace19ffac0cddb4412b42011f00443d8d0204f6f"} Nov 21 15:01:49 crc kubenswrapper[4904]: I1121 15:01:49.477265 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.477242861 podStartE2EDuration="3.477242861s" podCreationTimestamp="2025-11-21 15:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:01:49.474682849 +0000 UTC m=+5383.596215421" watchObservedRunningTime="2025-11-21 15:01:49.477242861 +0000 UTC m=+5383.598775413" Nov 21 15:01:51 crc kubenswrapper[4904]: I1121 15:01:51.046541 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 21 15:01:51 crc kubenswrapper[4904]: I1121 15:01:51.138765 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:51 crc kubenswrapper[4904]: 
I1121 15:01:51.479367 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerName="manila-share" containerID="cri-o://ff8f1bc762e20cc08021475c9c44fc99a08d5d8926ec654fb7dc4bf6b395dfd7" gracePeriod=30 Nov 21 15:01:51 crc kubenswrapper[4904]: I1121 15:01:51.479936 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerName="probe" containerID="cri-o://a457c1eb9e1778721028284b8b304159735130174eb4ca78b210f6e7a39a3e8e" gracePeriod=30 Nov 21 15:01:52 crc kubenswrapper[4904]: I1121 15:01:52.495614 4904 generic.go:334] "Generic (PLEG): container finished" podID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerID="a457c1eb9e1778721028284b8b304159735130174eb4ca78b210f6e7a39a3e8e" exitCode=0 Nov 21 15:01:52 crc kubenswrapper[4904]: I1121 15:01:52.496165 4904 generic.go:334] "Generic (PLEG): container finished" podID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerID="ff8f1bc762e20cc08021475c9c44fc99a08d5d8926ec654fb7dc4bf6b395dfd7" exitCode=1 Nov 21 15:01:52 crc kubenswrapper[4904]: I1121 15:01:52.495727 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ae73f5d3-0420-4d34-9f4d-32aa3881f619","Type":"ContainerDied","Data":"a457c1eb9e1778721028284b8b304159735130174eb4ca78b210f6e7a39a3e8e"} Nov 21 15:01:52 crc kubenswrapper[4904]: I1121 15:01:52.496225 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ae73f5d3-0420-4d34-9f4d-32aa3881f619","Type":"ContainerDied","Data":"ff8f1bc762e20cc08021475c9c44fc99a08d5d8926ec654fb7dc4bf6b395dfd7"} Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.403756 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.514002 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"ae73f5d3-0420-4d34-9f4d-32aa3881f619","Type":"ContainerDied","Data":"9d4aa1096b76d647d44ec0cbb2ec13773226a64bd2b3633598057790a3999e51"} Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.514053 4904 scope.go:117] "RemoveContainer" containerID="a457c1eb9e1778721028284b8b304159735130174eb4ca78b210f6e7a39a3e8e" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.514255 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.542568 4904 scope.go:117] "RemoveContainer" containerID="ff8f1bc762e20cc08021475c9c44fc99a08d5d8926ec654fb7dc4bf6b395dfd7" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552182 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-etc-machine-id\") pod \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552272 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-266rm\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-kube-api-access-266rm\") pod \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552353 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-var-lib-manila\") pod \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552378 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ae73f5d3-0420-4d34-9f4d-32aa3881f619" (UID: "ae73f5d3-0420-4d34-9f4d-32aa3881f619"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552414 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-scripts\") pod \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552536 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data-custom\") pod \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552591 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "ae73f5d3-0420-4d34-9f4d-32aa3881f619" (UID: "ae73f5d3-0420-4d34-9f4d-32aa3881f619"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552713 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-combined-ca-bundle\") pod \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552804 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data\") pod \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.552964 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-ceph\") pod \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\" (UID: \"ae73f5d3-0420-4d34-9f4d-32aa3881f619\") " Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.554816 4904 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.554837 4904 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/ae73f5d3-0420-4d34-9f4d-32aa3881f619-var-lib-manila\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.564277 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-scripts" (OuterVolumeSpecName: "scripts") pod "ae73f5d3-0420-4d34-9f4d-32aa3881f619" (UID: "ae73f5d3-0420-4d34-9f4d-32aa3881f619"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.564592 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-ceph" (OuterVolumeSpecName: "ceph") pod "ae73f5d3-0420-4d34-9f4d-32aa3881f619" (UID: "ae73f5d3-0420-4d34-9f4d-32aa3881f619"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.564925 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae73f5d3-0420-4d34-9f4d-32aa3881f619" (UID: "ae73f5d3-0420-4d34-9f4d-32aa3881f619"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.588755 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-kube-api-access-266rm" (OuterVolumeSpecName: "kube-api-access-266rm") pod "ae73f5d3-0420-4d34-9f4d-32aa3881f619" (UID: "ae73f5d3-0420-4d34-9f4d-32aa3881f619"). InnerVolumeSpecName "kube-api-access-266rm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.657089 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae73f5d3-0420-4d34-9f4d-32aa3881f619" (UID: "ae73f5d3-0420-4d34-9f4d-32aa3881f619"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.657987 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.658042 4904 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-ceph\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.658058 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-266rm\" (UniqueName: \"kubernetes.io/projected/ae73f5d3-0420-4d34-9f4d-32aa3881f619-kube-api-access-266rm\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.658073 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.658089 4904 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.705211 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data" (OuterVolumeSpecName: "config-data") pod "ae73f5d3-0420-4d34-9f4d-32aa3881f619" (UID: "ae73f5d3-0420-4d34-9f4d-32aa3881f619"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.760869 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae73f5d3-0420-4d34-9f4d-32aa3881f619-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.856735 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.871393 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.895331 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:53 crc kubenswrapper[4904]: E1121 15:01:53.896082 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerName="manila-share" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.896109 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerName="manila-share" Nov 21 15:01:53 crc kubenswrapper[4904]: E1121 15:01:53.896147 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerName="probe" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.896157 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerName="probe" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.896556 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerName="probe" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.896606 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" containerName="manila-share" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.898344 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.901161 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.907203 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.966714 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnssk\" (UniqueName: \"kubernetes.io/projected/f2e2192d-1157-4025-9df6-deab99f244fd-kube-api-access-lnssk\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.967161 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2e2192d-1157-4025-9df6-deab99f244fd-ceph\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.967217 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.967284 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2e2192d-1157-4025-9df6-deab99f244fd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.967520 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.967887 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-config-data\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.968067 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-scripts\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:53 crc kubenswrapper[4904]: I1121 15:01:53.968122 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f2e2192d-1157-4025-9df6-deab99f244fd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc 
kubenswrapper[4904]: I1121 15:01:54.071194 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-scripts\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.071267 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f2e2192d-1157-4025-9df6-deab99f244fd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.071368 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnssk\" (UniqueName: \"kubernetes.io/projected/f2e2192d-1157-4025-9df6-deab99f244fd-kube-api-access-lnssk\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.071506 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/f2e2192d-1157-4025-9df6-deab99f244fd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.071615 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2e2192d-1157-4025-9df6-deab99f244fd-ceph\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.071844 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.073007 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2e2192d-1157-4025-9df6-deab99f244fd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.073176 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.073428 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-config-data\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.074358 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/f2e2192d-1157-4025-9df6-deab99f244fd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.077862 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-config-data\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.080484 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2e2192d-1157-4025-9df6-deab99f244fd-ceph\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.080524 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.081384 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.084019 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e2192d-1157-4025-9df6-deab99f244fd-scripts\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.098474 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnssk\" (UniqueName: \"kubernetes.io/projected/f2e2192d-1157-4025-9df6-deab99f244fd-kube-api-access-lnssk\") pod \"manila-share-share1-0\" (UID: \"f2e2192d-1157-4025-9df6-deab99f244fd\") " pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.218675 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.534347 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae73f5d3-0420-4d34-9f4d-32aa3881f619" path="/var/lib/kubelet/pods/ae73f5d3-0420-4d34-9f4d-32aa3881f619/volumes" Nov 21 15:01:54 crc kubenswrapper[4904]: I1121 15:01:54.877187 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Nov 21 15:01:54 crc kubenswrapper[4904]: W1121 15:01:54.881021 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e2192d_1157_4025_9df6_deab99f244fd.slice/crio-97520dbcf6ad4de146e22c09b929c5f7c7d7f92fa9e056290a203ed8bb70e00c WatchSource:0}: Error finding container 97520dbcf6ad4de146e22c09b929c5f7c7d7f92fa9e056290a203ed8bb70e00c: Status 404 returned error can't find the container with id 97520dbcf6ad4de146e22c09b929c5f7c7d7f92fa9e056290a203ed8bb70e00c Nov 21 15:01:55 crc kubenswrapper[4904]: I1121 15:01:55.551155 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f2e2192d-1157-4025-9df6-deab99f244fd","Type":"ContainerStarted","Data":"97520dbcf6ad4de146e22c09b929c5f7c7d7f92fa9e056290a203ed8bb70e00c"} Nov 21 15:01:56 crc kubenswrapper[4904]: I1121 15:01:56.568441 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f2e2192d-1157-4025-9df6-deab99f244fd","Type":"ContainerStarted","Data":"777ca9c635b47015b1b48362520d09d90dc9efc16be2a07c4417c9484a471f56"} Nov 21 15:01:56 crc kubenswrapper[4904]: I1121 15:01:56.568934 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"f2e2192d-1157-4025-9df6-deab99f244fd","Type":"ContainerStarted","Data":"122826cd04fd80ecfe7700d98adf3d2fcf05d537261f785c96ac573ecec23284"} Nov 21 15:01:56 crc kubenswrapper[4904]: I1121 15:01:56.596140 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.596112191 podStartE2EDuration="3.596112191s" podCreationTimestamp="2025-11-21 15:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:01:56.587322468 +0000 UTC m=+5390.708855030" watchObservedRunningTime="2025-11-21 15:01:56.596112191 +0000 UTC m=+5390.717644753" Nov 21 15:01:56 crc kubenswrapper[4904]: I1121 15:01:56.836014 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Nov 21 15:01:59 crc kubenswrapper[4904]: I1121 15:01:59.148225 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-648f66fff6-bp2z7" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.75:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.75:8443: connect: connection refused" Nov 21 15:01:59 crc kubenswrapper[4904]: I1121 15:01:59.148874 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-648f66fff6-bp2z7" Nov 21 15:02:04 crc kubenswrapper[4904]: I1121 15:02:04.219560 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Nov 21 15:02:05 crc kubenswrapper[4904]: I1121 15:02:05.196630 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Nov 21 15:02:05 crc kubenswrapper[4904]: I1121 15:02:05.205559 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 21 15:02:06 crc kubenswrapper[4904]: I1121 15:02:06.714154 4904 generic.go:334] "Generic (PLEG): container finished" podID="95cd8e52-9048-40c0-8f62-b36f39433908" containerID="fc01bcb4218b7b9864118de827a26def8b52df4c425fa708adcd7dd3befa74dd" exitCode=137 Nov 21 15:02:06 crc kubenswrapper[4904]: I1121 15:02:06.715087 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-648f66fff6-bp2z7" event={"ID":"95cd8e52-9048-40c0-8f62-b36f39433908","Type":"ContainerDied","Data":"fc01bcb4218b7b9864118de827a26def8b52df4c425fa708adcd7dd3befa74dd"} Nov 21 15:02:06 crc kubenswrapper[4904]: I1121 15:02:06.936769 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-648f66fff6-bp2z7" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.065510 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8kx9\" (UniqueName: \"kubernetes.io/projected/95cd8e52-9048-40c0-8f62-b36f39433908-kube-api-access-b8kx9\") pod \"95cd8e52-9048-40c0-8f62-b36f39433908\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.065586 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-config-data\") pod \"95cd8e52-9048-40c0-8f62-b36f39433908\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.065894 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cd8e52-9048-40c0-8f62-b36f39433908-logs\") pod \"95cd8e52-9048-40c0-8f62-b36f39433908\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.066017 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-scripts\") pod \"95cd8e52-9048-40c0-8f62-b36f39433908\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.066071 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-tls-certs\") pod \"95cd8e52-9048-40c0-8f62-b36f39433908\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.066154 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-secret-key\") pod \"95cd8e52-9048-40c0-8f62-b36f39433908\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.066228 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-combined-ca-bundle\") pod \"95cd8e52-9048-40c0-8f62-b36f39433908\" (UID: \"95cd8e52-9048-40c0-8f62-b36f39433908\") " Nov 21 15:02:07 
crc kubenswrapper[4904]: I1121 15:02:07.066348 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95cd8e52-9048-40c0-8f62-b36f39433908-logs" (OuterVolumeSpecName: "logs") pod "95cd8e52-9048-40c0-8f62-b36f39433908" (UID: "95cd8e52-9048-40c0-8f62-b36f39433908"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.067205 4904 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cd8e52-9048-40c0-8f62-b36f39433908-logs\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.072875 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "95cd8e52-9048-40c0-8f62-b36f39433908" (UID: "95cd8e52-9048-40c0-8f62-b36f39433908"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.081019 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cd8e52-9048-40c0-8f62-b36f39433908-kube-api-access-b8kx9" (OuterVolumeSpecName: "kube-api-access-b8kx9") pod "95cd8e52-9048-40c0-8f62-b36f39433908" (UID: "95cd8e52-9048-40c0-8f62-b36f39433908"). InnerVolumeSpecName "kube-api-access-b8kx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.100873 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-config-data" (OuterVolumeSpecName: "config-data") pod "95cd8e52-9048-40c0-8f62-b36f39433908" (UID: "95cd8e52-9048-40c0-8f62-b36f39433908"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.104129 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-scripts" (OuterVolumeSpecName: "scripts") pod "95cd8e52-9048-40c0-8f62-b36f39433908" (UID: "95cd8e52-9048-40c0-8f62-b36f39433908"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.103554 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95cd8e52-9048-40c0-8f62-b36f39433908" (UID: "95cd8e52-9048-40c0-8f62-b36f39433908"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.153456 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "95cd8e52-9048-40c0-8f62-b36f39433908" (UID: "95cd8e52-9048-40c0-8f62-b36f39433908"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.169844 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.170073 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.170164 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8kx9\" (UniqueName: \"kubernetes.io/projected/95cd8e52-9048-40c0-8f62-b36f39433908-kube-api-access-b8kx9\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.170224 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.170277 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95cd8e52-9048-40c0-8f62-b36f39433908-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.170330 4904 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/95cd8e52-9048-40c0-8f62-b36f39433908-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.770027 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-648f66fff6-bp2z7" event={"ID":"95cd8e52-9048-40c0-8f62-b36f39433908","Type":"ContainerDied","Data":"303d1a5169b64d2feb34450fc31456a99d81a964b0707806b0ad57735ce3bf66"} Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.770494 4904 scope.go:117] "RemoveContainer" containerID="11ceaf74ebc5a58e4025282426be09a7c7c72ef4a5297ca34829896518d140fa" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.770757 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-648f66fff6-bp2z7" Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.813681 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-648f66fff6-bp2z7"] Nov 21 15:02:07 crc kubenswrapper[4904]: I1121 15:02:07.824961 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-648f66fff6-bp2z7"] Nov 21 15:02:08 crc kubenswrapper[4904]: I1121 15:02:08.040102 4904 scope.go:117] "RemoveContainer" containerID="fc01bcb4218b7b9864118de827a26def8b52df4c425fa708adcd7dd3befa74dd" Nov 21 15:02:08 crc kubenswrapper[4904]: I1121 15:02:08.538938 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" path="/var/lib/kubelet/pods/95cd8e52-9048-40c0-8f62-b36f39433908/volumes" Nov 21 15:02:08 crc kubenswrapper[4904]: I1121 15:02:08.926189 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.844385 4904 generic.go:334] "Generic (PLEG): container finished" podID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerID="8adf41e439a686d3665e7ff1236c13f6f981fda0009b0f59c650ebbb53766d06" exitCode=137 Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.844495 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerDied","Data":"8adf41e439a686d3665e7ff1236c13f6f981fda0009b0f59c650ebbb53766d06"} Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.845304 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a117ac6e-4da2-489b-ae58-1eabb354721f","Type":"ContainerDied","Data":"ed72ccc79b8fd58f56e56bfc0948ed24b14ef55962756bf6d44b229d164a16f2"} Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.845328 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed72ccc79b8fd58f56e56bfc0948ed24b14ef55962756bf6d44b229d164a16f2" Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.867366 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.916960 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-log-httpd\") pod \"a117ac6e-4da2-489b-ae58-1eabb354721f\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.917022 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-combined-ca-bundle\") pod \"a117ac6e-4da2-489b-ae58-1eabb354721f\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.917135 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8hnr\" (UniqueName: \"kubernetes.io/projected/a117ac6e-4da2-489b-ae58-1eabb354721f-kube-api-access-w8hnr\") pod \"a117ac6e-4da2-489b-ae58-1eabb354721f\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.917195 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-config-data\") pod \"a117ac6e-4da2-489b-ae58-1eabb354721f\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.917226 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-scripts\") pod \"a117ac6e-4da2-489b-ae58-1eabb354721f\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.917278 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-sg-core-conf-yaml\") pod \"a117ac6e-4da2-489b-ae58-1eabb354721f\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.917388 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-run-httpd\") pod \"a117ac6e-4da2-489b-ae58-1eabb354721f\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.917538 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-ceilometer-tls-certs\") pod \"a117ac6e-4da2-489b-ae58-1eabb354721f\" (UID: \"a117ac6e-4da2-489b-ae58-1eabb354721f\") " Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.922053 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a117ac6e-4da2-489b-ae58-1eabb354721f" (UID: "a117ac6e-4da2-489b-ae58-1eabb354721f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.924011 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a117ac6e-4da2-489b-ae58-1eabb354721f" (UID: "a117ac6e-4da2-489b-ae58-1eabb354721f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.927049 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a117ac6e-4da2-489b-ae58-1eabb354721f-kube-api-access-w8hnr" (OuterVolumeSpecName: "kube-api-access-w8hnr") pod "a117ac6e-4da2-489b-ae58-1eabb354721f" (UID: "a117ac6e-4da2-489b-ae58-1eabb354721f"). InnerVolumeSpecName "kube-api-access-w8hnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.931843 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-scripts" (OuterVolumeSpecName: "scripts") pod "a117ac6e-4da2-489b-ae58-1eabb354721f" (UID: "a117ac6e-4da2-489b-ae58-1eabb354721f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:02:11 crc kubenswrapper[4904]: I1121 15:02:11.968749 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a117ac6e-4da2-489b-ae58-1eabb354721f" (UID: "a117ac6e-4da2-489b-ae58-1eabb354721f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.020452 4904 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.020487 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8hnr\" (UniqueName: \"kubernetes.io/projected/a117ac6e-4da2-489b-ae58-1eabb354721f-kube-api-access-w8hnr\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.020498 4904 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-scripts\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.020508 4904 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.020516 4904 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a117ac6e-4da2-489b-ae58-1eabb354721f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.047747 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a117ac6e-4da2-489b-ae58-1eabb354721f" (UID: "a117ac6e-4da2-489b-ae58-1eabb354721f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.085122 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a117ac6e-4da2-489b-ae58-1eabb354721f" (UID: "a117ac6e-4da2-489b-ae58-1eabb354721f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.109903 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-config-data" (OuterVolumeSpecName: "config-data") pod "a117ac6e-4da2-489b-ae58-1eabb354721f" (UID: "a117ac6e-4da2-489b-ae58-1eabb354721f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.122790 4904 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.122834 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.122847 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a117ac6e-4da2-489b-ae58-1eabb354721f-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.856651 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.892868 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.906964 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.923238 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 21 15:02:12 crc kubenswrapper[4904]: E1121 15:02:12.923961 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="ceilometer-central-agent" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.923986 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="ceilometer-central-agent" Nov 21 15:02:12 crc kubenswrapper[4904]: E1121 15:02:12.924010 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924018 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" Nov 21 15:02:12 crc kubenswrapper[4904]: E1121 15:02:12.924047 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon-log" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924056 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon-log" Nov 21 15:02:12 crc kubenswrapper[4904]: E1121 15:02:12.924071 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="sg-core" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924078 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="sg-core" Nov 21 15:02:12 crc kubenswrapper[4904]: E1121 15:02:12.924097 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="ceilometer-notification-agent" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924106 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="ceilometer-notification-agent" Nov 21 15:02:12 crc kubenswrapper[4904]: E1121 15:02:12.924127 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="proxy-httpd" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924135 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="proxy-httpd" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924414 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="ceilometer-central-agent" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924443 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924457 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="proxy-httpd" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924468 4904 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="sg-core" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924486 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cd8e52-9048-40c0-8f62-b36f39433908" containerName="horizon-log" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.924506 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" containerName="ceilometer-notification-agent" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.946528 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.946687 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.949536 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.949721 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 21 15:02:12 crc kubenswrapper[4904]: I1121 15:02:12.949834 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.043173 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.043614 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.043676 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cd84528-51ed-4df2-81cd-d0793668a01a-log-httpd\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.043709 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8h85\" (UniqueName: \"kubernetes.io/projected/7cd84528-51ed-4df2-81cd-d0793668a01a-kube-api-access-l8h85\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.043733 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-scripts\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.044094 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.044175 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cd84528-51ed-4df2-81cd-d0793668a01a-run-httpd\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.044310 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-config-data\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.146946 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.147014 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cd84528-51ed-4df2-81cd-d0793668a01a-log-httpd\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.147043 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8h85\" (UniqueName: \"kubernetes.io/projected/7cd84528-51ed-4df2-81cd-d0793668a01a-kube-api-access-l8h85\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.147068 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-scripts\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.147150 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.147727 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cd84528-51ed-4df2-81cd-d0793668a01a-log-httpd\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.148068 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cd84528-51ed-4df2-81cd-d0793668a01a-run-httpd\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.148137 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-config-data\") pod \"ceilometer-0\" (UID: 
\"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.148165 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.149482 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cd84528-51ed-4df2-81cd-d0793668a01a-run-httpd\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.152706 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-scripts\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.152979 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-config-data\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.153924 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.156257 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.163249 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cd84528-51ed-4df2-81cd-d0793668a01a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.165133 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8h85\" (UniqueName: \"kubernetes.io/projected/7cd84528-51ed-4df2-81cd-d0793668a01a-kube-api-access-l8h85\") pod \"ceilometer-0\" (UID: \"7cd84528-51ed-4df2-81cd-d0793668a01a\") " pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.264743 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.994230 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 21 15:02:13 crc kubenswrapper[4904]: I1121 15:02:13.997828 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:02:14 crc kubenswrapper[4904]: I1121 15:02:14.526340 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a117ac6e-4da2-489b-ae58-1eabb354721f" path="/var/lib/kubelet/pods/a117ac6e-4da2-489b-ae58-1eabb354721f/volumes" Nov 21 15:02:14 crc kubenswrapper[4904]: I1121 15:02:14.879694 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cd84528-51ed-4df2-81cd-d0793668a01a","Type":"ContainerStarted","Data":"c7dd369c2a210b9029ac6bdbe9b8433bb95cbb954dd4c3d513052e72d2211a0b"} Nov 21 15:02:14 crc kubenswrapper[4904]: I1121 15:02:14.880255 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cd84528-51ed-4df2-81cd-d0793668a01a","Type":"ContainerStarted","Data":"44a782058e51c17403583fafc6dcf86c7d33717d4c713cafbd6bc50beab2948a"} Nov 21 15:02:15 crc kubenswrapper[4904]: I1121 15:02:15.896206 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cd84528-51ed-4df2-81cd-d0793668a01a","Type":"ContainerStarted","Data":"db4e826c3a18f2c2aaa20eb2d74589b17430f5b6068b3bfb901475b719a62458"} Nov 21 15:02:16 crc kubenswrapper[4904]: I1121 15:02:16.070844 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Nov 21 15:02:16 crc kubenswrapper[4904]: I1121 15:02:16.913759 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cd84528-51ed-4df2-81cd-d0793668a01a","Type":"ContainerStarted","Data":"a2835b345d727df637d8a5ca0e01d34bffc79fd094612594b7d522336228166b"} Nov 21 15:02:18 crc kubenswrapper[4904]: I1121 15:02:18.941453 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cd84528-51ed-4df2-81cd-d0793668a01a","Type":"ContainerStarted","Data":"806a2973158eec1da4da415ff26899251f3bdd94e2277da5f70f949b80377cd5"} Nov 21 15:02:18 crc kubenswrapper[4904]: I1121 15:02:18.942139 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 21 15:02:18 crc kubenswrapper[4904]: I1121 15:02:18.966205 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.82911345 podStartE2EDuration="6.966175098s" podCreationTimestamp="2025-11-21 15:02:12 +0000 UTC" firstStartedPulling="2025-11-21 15:02:13.997582236 +0000 UTC m=+5408.119114788" lastFinishedPulling="2025-11-21 15:02:18.134643884 +0000 UTC m=+5412.256176436" observedRunningTime="2025-11-21 15:02:18.961980146 +0000 UTC m=+5413.083512698" watchObservedRunningTime="2025-11-21 15:02:18.966175098 +0000 UTC m=+5413.087707640" Nov 21 15:02:28 crc kubenswrapper[4904]: I1121 15:02:28.113916 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:02:28 crc kubenswrapper[4904]: I1121 15:02:28.114589 4904 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:02:43 crc kubenswrapper[4904]: I1121 15:02:43.276087 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 21 15:02:58 crc kubenswrapper[4904]: I1121 15:02:58.113750 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:02:58 crc kubenswrapper[4904]: I1121 15:02:58.114645 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.113811 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.114902 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.115005 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.116820 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25a1d25832a10a210874215d77da23ed0a585352adebb0db83f6cfc5818d8e6d"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.116950 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://25a1d25832a10a210874215d77da23ed0a585352adebb0db83f6cfc5818d8e6d" gracePeriod=600 Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.852994 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="25a1d25832a10a210874215d77da23ed0a585352adebb0db83f6cfc5818d8e6d" exitCode=0 Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.853056 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" 
event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"25a1d25832a10a210874215d77da23ed0a585352adebb0db83f6cfc5818d8e6d"} Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.853401 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"} Nov 21 15:03:28 crc kubenswrapper[4904]: I1121 15:03:28.853422 4904 scope.go:117] "RemoveContainer" containerID="cf647910853d9568c62342a47791372f356e1352e1c725ef2689d9b0c487b988" Nov 21 15:05:28 crc kubenswrapper[4904]: I1121 15:05:28.113425 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:05:28 crc kubenswrapper[4904]: I1121 15:05:28.114140 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:05:58 crc kubenswrapper[4904]: I1121 15:05:58.113968 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:05:58 crc kubenswrapper[4904]: I1121 15:05:58.114714 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:06:28 crc kubenswrapper[4904]: I1121 15:06:28.114122 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:06:28 crc kubenswrapper[4904]: I1121 15:06:28.114705 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:06:28 crc kubenswrapper[4904]: I1121 15:06:28.114749 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 15:06:28 crc kubenswrapper[4904]: I1121 15:06:28.115628 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 
15:06:28 crc kubenswrapper[4904]: I1121 15:06:28.115782 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" gracePeriod=600 Nov 21 15:06:28 crc kubenswrapper[4904]: E1121 15:06:28.250789 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:06:29 crc kubenswrapper[4904]: I1121 15:06:29.182579 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" exitCode=0 Nov 21 15:06:29 crc kubenswrapper[4904]: I1121 15:06:29.182640 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"} Nov 21 15:06:29 crc kubenswrapper[4904]: I1121 15:06:29.182744 4904 scope.go:117] "RemoveContainer" containerID="25a1d25832a10a210874215d77da23ed0a585352adebb0db83f6cfc5818d8e6d" Nov 21 15:06:29 crc kubenswrapper[4904]: I1121 15:06:29.188704 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:06:29 crc kubenswrapper[4904]: E1121 15:06:29.189637 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:06:41 crc kubenswrapper[4904]: I1121 15:06:41.514059 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:06:41 crc kubenswrapper[4904]: E1121 15:06:41.515223 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:06:55 crc kubenswrapper[4904]: I1121 15:06:55.514463 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:06:55 crc kubenswrapper[4904]: E1121 15:06:55.515361 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:07:09 crc kubenswrapper[4904]: I1121 15:07:09.514219 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:07:09 crc kubenswrapper[4904]: E1121 15:07:09.516619 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:07:21 crc kubenswrapper[4904]: I1121 15:07:21.513706 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:07:21 crc kubenswrapper[4904]: E1121 15:07:21.514642 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:07:36 crc kubenswrapper[4904]: I1121 15:07:36.520478 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:07:36 crc kubenswrapper[4904]: E1121 15:07:36.521753 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:07:48 crc kubenswrapper[4904]: I1121 15:07:48.514167 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:07:48 crc kubenswrapper[4904]: E1121 15:07:48.516660 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:08:00 crc kubenswrapper[4904]: I1121 15:08:00.513273 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:08:00 crc kubenswrapper[4904]: E1121 15:08:00.514242 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:08:11 crc kubenswrapper[4904]: I1121 15:08:11.514321 4904 
scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:08:11 crc kubenswrapper[4904]: E1121 15:08:11.516083 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:08:26 crc kubenswrapper[4904]: I1121 15:08:26.528582 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:08:26 crc kubenswrapper[4904]: E1121 15:08:26.532500 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:08:33 crc kubenswrapper[4904]: I1121 15:08:33.691873 4904 scope.go:117] "RemoveContainer" containerID="f4ef32037e6dd864e3c30fb9a9f05c0c8692a5ad3a4c757fad92a764d419a784" Nov 21 15:08:33 crc kubenswrapper[4904]: I1121 15:08:33.719168 4904 scope.go:117] "RemoveContainer" containerID="bdfd0fbfee69728eb81dbc2022f6ccd9d6ec7dd8f4e50ae1a3f64010ce38e46f" Nov 21 15:08:33 crc kubenswrapper[4904]: I1121 15:08:33.742228 4904 scope.go:117] "RemoveContainer" containerID="4e8d832bc36bfd0249073c8eeba89d15d140cd4a14570e1956a5933e153a6ef2" Nov 21 15:08:33 crc kubenswrapper[4904]: I1121 15:08:33.770294 4904 scope.go:117] "RemoveContainer" containerID="8adf41e439a686d3665e7ff1236c13f6f981fda0009b0f59c650ebbb53766d06" Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.568252 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p97tf"] Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.574674 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.598511 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p97tf"]
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.716207 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-utilities\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.716350 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7tk\" (UniqueName: \"kubernetes.io/projected/0c7a06b2-103e-40db-b8bc-0cf1d16315db-kube-api-access-lk7tk\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.716408 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-catalog-content\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.818811 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7tk\" (UniqueName: \"kubernetes.io/projected/0c7a06b2-103e-40db-b8bc-0cf1d16315db-kube-api-access-lk7tk\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.818901 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-catalog-content\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.819043 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-utilities\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.819601 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-utilities\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.819955 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-catalog-content\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.855827 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7tk\" (UniqueName: \"kubernetes.io/projected/0c7a06b2-103e-40db-b8bc-0cf1d16315db-kube-api-access-lk7tk\") pod \"community-operators-p97tf\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") " pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:39 crc kubenswrapper[4904]: I1121 15:08:39.902460 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:08:40 crc kubenswrapper[4904]: I1121 15:08:40.469883 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p97tf"]
Nov 21 15:08:40 crc kubenswrapper[4904]: I1121 15:08:40.514117 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"
Nov 21 15:08:40 crc kubenswrapper[4904]: E1121 15:08:40.514445 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:08:40 crc kubenswrapper[4904]: I1121 15:08:40.849977 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p97tf" event={"ID":"0c7a06b2-103e-40db-b8bc-0cf1d16315db","Type":"ContainerStarted","Data":"a271539a5300113fc0b8d26c39499635e7bb02d7d51d5228ef56efb6bff65d32"}
Nov 21 15:08:41 crc kubenswrapper[4904]: I1121 15:08:41.867091 4904 generic.go:334] "Generic (PLEG): container finished" podID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerID="ed5f2d0f995bc643d166c42f3f4d29c0ec362ac23bd1fa5070f45a7c9ebf4f7c" exitCode=0
Nov 21 15:08:41 crc kubenswrapper[4904]: I1121 15:08:41.867721 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p97tf" event={"ID":"0c7a06b2-103e-40db-b8bc-0cf1d16315db","Type":"ContainerDied","Data":"ed5f2d0f995bc643d166c42f3f4d29c0ec362ac23bd1fa5070f45a7c9ebf4f7c"}
Nov 21 15:08:41 crc kubenswrapper[4904]: I1121 15:08:41.871294 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 15:08:45 crc kubenswrapper[4904]: I1121 15:08:45.917101 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p97tf" event={"ID":"0c7a06b2-103e-40db-b8bc-0cf1d16315db","Type":"ContainerStarted","Data":"fad23899265ac90e58afed5212997b640fadde6b22ba7e24854ad2a47813a108"}
Nov 21 15:08:51 crc kubenswrapper[4904]: I1121 15:08:51.513439 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"
Nov 21 15:08:51 crc kubenswrapper[4904]: E1121 15:08:51.514672 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:08:56 crc kubenswrapper[4904]: I1121 15:08:56.044618 4904 generic.go:334] "Generic (PLEG): container finished" podID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerID="fad23899265ac90e58afed5212997b640fadde6b22ba7e24854ad2a47813a108" exitCode=0
Nov 21 15:08:56 crc kubenswrapper[4904]: I1121 15:08:56.044755 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p97tf" event={"ID":"0c7a06b2-103e-40db-b8bc-0cf1d16315db","Type":"ContainerDied","Data":"fad23899265ac90e58afed5212997b640fadde6b22ba7e24854ad2a47813a108"}
Nov 21 15:09:03 crc kubenswrapper[4904]: I1121 15:09:03.129101 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p97tf" event={"ID":"0c7a06b2-103e-40db-b8bc-0cf1d16315db","Type":"ContainerStarted","Data":"c44bb1719e88494b4da3abb64731c7fa4c2a80cc5a65bdce6a3735985d8018c0"}
Nov 21 15:09:05 crc kubenswrapper[4904]: I1121 15:09:05.513277 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"
Nov 21 15:09:05 crc kubenswrapper[4904]: E1121 15:09:05.513905 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:09:09 crc kubenswrapper[4904]: I1121 15:09:09.903270 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:09:09 crc kubenswrapper[4904]: I1121 15:09:09.904157 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:09:09 crc kubenswrapper[4904]: I1121 15:09:09.955123 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:09:09 crc kubenswrapper[4904]: I1121 15:09:09.988859 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p97tf" podStartSLOduration=10.691915845 podStartE2EDuration="30.988834424s" podCreationTimestamp="2025-11-21 15:08:39 +0000 UTC" firstStartedPulling="2025-11-21 15:08:41.870888425 +0000 UTC m=+5795.992420997" lastFinishedPulling="2025-11-21 15:09:02.167807024 +0000 UTC m=+5816.289339576" observedRunningTime="2025-11-21 15:09:03.149272185 +0000 UTC m=+5817.270804757" watchObservedRunningTime="2025-11-21 15:09:09.988834424 +0000 UTC m=+5824.110366986"
Nov 21 15:09:10 crc kubenswrapper[4904]: I1121 15:09:10.302616 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:09:10 crc kubenswrapper[4904]: I1121 15:09:10.785775 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p97tf"]
Nov 21 15:09:12 crc kubenswrapper[4904]: I1121 15:09:12.266667 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p97tf" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerName="registry-server" containerID="cri-o://c44bb1719e88494b4da3abb64731c7fa4c2a80cc5a65bdce6a3735985d8018c0" gracePeriod=2
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.292066 4904 generic.go:334] "Generic (PLEG): container finished" podID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerID="c44bb1719e88494b4da3abb64731c7fa4c2a80cc5a65bdce6a3735985d8018c0" exitCode=0
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.292165 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p97tf" event={"ID":"0c7a06b2-103e-40db-b8bc-0cf1d16315db","Type":"ContainerDied","Data":"c44bb1719e88494b4da3abb64731c7fa4c2a80cc5a65bdce6a3735985d8018c0"}
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.423080 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.493004 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-utilities\") pod \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") "
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.493407 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-catalog-content\") pod \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") "
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.493524 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk7tk\" (UniqueName: \"kubernetes.io/projected/0c7a06b2-103e-40db-b8bc-0cf1d16315db-kube-api-access-lk7tk\") pod \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\" (UID: \"0c7a06b2-103e-40db-b8bc-0cf1d16315db\") "
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.494211 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-utilities" (OuterVolumeSpecName: "utilities") pod "0c7a06b2-103e-40db-b8bc-0cf1d16315db" (UID: "0c7a06b2-103e-40db-b8bc-0cf1d16315db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.494599 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.537959 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7a06b2-103e-40db-b8bc-0cf1d16315db-kube-api-access-lk7tk" (OuterVolumeSpecName: "kube-api-access-lk7tk") pod "0c7a06b2-103e-40db-b8bc-0cf1d16315db" (UID: "0c7a06b2-103e-40db-b8bc-0cf1d16315db"). InnerVolumeSpecName "kube-api-access-lk7tk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.587088 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c7a06b2-103e-40db-b8bc-0cf1d16315db" (UID: "0c7a06b2-103e-40db-b8bc-0cf1d16315db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.597807 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c7a06b2-103e-40db-b8bc-0cf1d16315db-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 15:09:13 crc kubenswrapper[4904]: I1121 15:09:13.597853 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk7tk\" (UniqueName: \"kubernetes.io/projected/0c7a06b2-103e-40db-b8bc-0cf1d16315db-kube-api-access-lk7tk\") on node \"crc\" DevicePath \"\""
Nov 21 15:09:14 crc kubenswrapper[4904]: I1121 15:09:14.310080 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p97tf" event={"ID":"0c7a06b2-103e-40db-b8bc-0cf1d16315db","Type":"ContainerDied","Data":"a271539a5300113fc0b8d26c39499635e7bb02d7d51d5228ef56efb6bff65d32"}
Nov 21 15:09:14 crc kubenswrapper[4904]: I1121 15:09:14.310779 4904 scope.go:117] "RemoveContainer" containerID="c44bb1719e88494b4da3abb64731c7fa4c2a80cc5a65bdce6a3735985d8018c0"
Nov 21 15:09:14 crc kubenswrapper[4904]: I1121 15:09:14.310113 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p97tf"
Nov 21 15:09:14 crc kubenswrapper[4904]: I1121 15:09:14.351159 4904 scope.go:117] "RemoveContainer" containerID="fad23899265ac90e58afed5212997b640fadde6b22ba7e24854ad2a47813a108"
Nov 21 15:09:14 crc kubenswrapper[4904]: I1121 15:09:14.359791 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p97tf"]
Nov 21 15:09:14 crc kubenswrapper[4904]: I1121 15:09:14.373067 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p97tf"]
Nov 21 15:09:14 crc kubenswrapper[4904]: I1121 15:09:14.382220 4904 scope.go:117] "RemoveContainer" containerID="ed5f2d0f995bc643d166c42f3f4d29c0ec362ac23bd1fa5070f45a7c9ebf4f7c"
Nov 21 15:09:14 crc kubenswrapper[4904]: I1121 15:09:14.531745 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" path="/var/lib/kubelet/pods/0c7a06b2-103e-40db-b8bc-0cf1d16315db/volumes"
Nov 21 15:09:16 crc kubenswrapper[4904]: I1121 15:09:16.523375 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"
Nov 21 15:09:16 crc kubenswrapper[4904]: E1121 15:09:16.530951 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:09:30 crc kubenswrapper[4904]: I1121 15:09:30.513166 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"
Nov 21 15:09:30 crc kubenswrapper[4904]: E1121 15:09:30.514179 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:09:45 crc kubenswrapper[4904]: I1121 15:09:45.513564 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:09:45 crc kubenswrapper[4904]: E1121 15:09:45.514501 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:10:00 crc kubenswrapper[4904]: I1121 15:10:00.513202 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:10:00 crc kubenswrapper[4904]: E1121 15:10:00.514309 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:10:13 crc kubenswrapper[4904]: I1121 15:10:13.513283 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:10:13 crc kubenswrapper[4904]: E1121 15:10:13.514267 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:10:24 crc kubenswrapper[4904]: I1121 15:10:24.514123 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:10:24 crc kubenswrapper[4904]: E1121 15:10:24.515053 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:10:36 crc kubenswrapper[4904]: I1121 15:10:36.522565 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:10:36 crc kubenswrapper[4904]: E1121 15:10:36.523966 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:10:40 crc kubenswrapper[4904]: I1121 15:10:40.063298 4904 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-hjx5b"] Nov 21 15:10:40 crc kubenswrapper[4904]: I1121 15:10:40.075920 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-d760-account-create-z7s4m"] Nov 21 15:10:40 crc kubenswrapper[4904]: I1121 15:10:40.089629 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-d760-account-create-z7s4m"] Nov 21 15:10:40 crc kubenswrapper[4904]: I1121 15:10:40.102178 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-hjx5b"] Nov 21 15:10:40 crc kubenswrapper[4904]: I1121 15:10:40.535525 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5" path="/var/lib/kubelet/pods/594f08d9-65a9-48e0-b1b2-5bcc37d1e4f5/volumes" Nov 21 15:10:40 crc kubenswrapper[4904]: I1121 15:10:40.538306 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92872ce6-2457-4262-9960-71617702e973" path="/var/lib/kubelet/pods/92872ce6-2457-4262-9960-71617702e973/volumes" Nov 21 15:10:48 crc kubenswrapper[4904]: I1121 15:10:48.516142 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:10:48 crc kubenswrapper[4904]: E1121 15:10:48.517159 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:10:59 crc kubenswrapper[4904]: I1121 15:10:59.513476 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:10:59 crc kubenswrapper[4904]: E1121 15:10:59.514375 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:11:12 crc kubenswrapper[4904]: I1121 15:11:12.516691 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:11:12 crc kubenswrapper[4904]: E1121 15:11:12.517717 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:11:18 crc kubenswrapper[4904]: I1121 15:11:18.053082 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-hk6xj"] Nov 21 15:11:18 crc kubenswrapper[4904]: I1121 15:11:18.071793 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-hk6xj"] Nov 21 15:11:18 crc kubenswrapper[4904]: I1121 15:11:18.529991 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="21abe39c-baf1-4382-b041-ec03d59a99b5" path="/var/lib/kubelet/pods/21abe39c-baf1-4382-b041-ec03d59a99b5/volumes" Nov 21 15:11:25 crc kubenswrapper[4904]: I1121 15:11:25.513140 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:11:25 crc kubenswrapper[4904]: E1121 15:11:25.514142 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:11:33 crc kubenswrapper[4904]: I1121 15:11:33.913083 4904 scope.go:117] "RemoveContainer" containerID="ea84e43aeed9826334bac83e3ab307b54c70c718736ac8ec53510655f05e250d" Nov 21 15:11:33 crc kubenswrapper[4904]: I1121 15:11:33.962550 4904 scope.go:117] "RemoveContainer" containerID="1eb033ad46e6fd4cb0676b9fd68869c9108de90864b6908a7b0183a4a7aa12cb" Nov 21 15:11:34 crc kubenswrapper[4904]: I1121 15:11:34.013475 4904 scope.go:117] "RemoveContainer" containerID="7ab5f5c29cb0c108eb4a44553672ecc0393b3ee693e8e848113609703df238a4" Nov 21 15:11:39 crc kubenswrapper[4904]: I1121 15:11:39.513851 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07" Nov 21 15:11:40 crc kubenswrapper[4904]: I1121 15:11:40.116582 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"710cfcaf356741555016c5de006a3d484493bacb33070045a853663ef14fb9ed"} Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.128481 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vpf25"] Nov 21 15:12:21 crc kubenswrapper[4904]: E1121 15:12:21.130058 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerName="extract-content" Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.130079 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerName="extract-content" Nov 21 15:12:21 crc kubenswrapper[4904]: E1121 15:12:21.130096 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerName="extract-utilities" Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.130105 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerName="extract-utilities" Nov 21 15:12:21 crc kubenswrapper[4904]: E1121 15:12:21.130179 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerName="registry-server" Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.130188 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerName="registry-server" Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.130482 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7a06b2-103e-40db-b8bc-0cf1d16315db" containerName="registry-server" Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.132961 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.140908 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vpf25"]
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.309010 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr7sg\" (UniqueName: \"kubernetes.io/projected/928394e1-ae5d-4c11-bbe3-c514b3706720-kube-api-access-rr7sg\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.309081 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-utilities\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.309114 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-catalog-content\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.411369 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr7sg\" (UniqueName: \"kubernetes.io/projected/928394e1-ae5d-4c11-bbe3-c514b3706720-kube-api-access-rr7sg\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.411443 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-utilities\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.411467 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-catalog-content\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.411953 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-catalog-content\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.412513 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-utilities\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.436335 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr7sg\" (UniqueName: \"kubernetes.io/projected/928394e1-ae5d-4c11-bbe3-c514b3706720-kube-api-access-rr7sg\") pod \"certified-operators-vpf25\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") " pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:21 crc kubenswrapper[4904]: I1121 15:12:21.459070 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:22 crc kubenswrapper[4904]: I1121 15:12:22.059881 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vpf25"]
Nov 21 15:12:22 crc kubenswrapper[4904]: I1121 15:12:22.694687 4904 generic.go:334] "Generic (PLEG): container finished" podID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerID="f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690" exitCode=0
Nov 21 15:12:22 crc kubenswrapper[4904]: I1121 15:12:22.695201 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpf25" event={"ID":"928394e1-ae5d-4c11-bbe3-c514b3706720","Type":"ContainerDied","Data":"f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690"}
Nov 21 15:12:22 crc kubenswrapper[4904]: I1121 15:12:22.695288 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpf25" event={"ID":"928394e1-ae5d-4c11-bbe3-c514b3706720","Type":"ContainerStarted","Data":"c2666937e7a204785f4d8117620ed44c6dee6785f7838197b3d69282bc48ac2f"}
Nov 21 15:12:24 crc kubenswrapper[4904]: I1121 15:12:24.719168 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpf25" event={"ID":"928394e1-ae5d-4c11-bbe3-c514b3706720","Type":"ContainerStarted","Data":"abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6"}
Nov 21 15:12:25 crc kubenswrapper[4904]: I1121 15:12:25.733262 4904 generic.go:334] "Generic (PLEG): container finished" podID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerID="abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6" exitCode=0
Nov 21 15:12:25 crc kubenswrapper[4904]: I1121 15:12:25.733320 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpf25" event={"ID":"928394e1-ae5d-4c11-bbe3-c514b3706720","Type":"ContainerDied","Data":"abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6"}
Nov 21 15:12:26 crc kubenswrapper[4904]: I1121 15:12:26.754171 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpf25" event={"ID":"928394e1-ae5d-4c11-bbe3-c514b3706720","Type":"ContainerStarted","Data":"7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356"}
Nov 21 15:12:26 crc kubenswrapper[4904]: I1121 15:12:26.780841 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vpf25" podStartSLOduration=2.349755601 podStartE2EDuration="5.780813045s" podCreationTimestamp="2025-11-21 15:12:21 +0000 UTC" firstStartedPulling="2025-11-21 15:12:22.697933994 +0000 UTC m=+6016.819466546" lastFinishedPulling="2025-11-21 15:12:26.128991438 +0000 UTC m=+6020.250523990" observedRunningTime="2025-11-21 15:12:26.775402223 +0000 UTC m=+6020.896934815" watchObservedRunningTime="2025-11-21 15:12:26.780813045 +0000 UTC m=+6020.902345597"
Nov 21 15:12:31 crc kubenswrapper[4904]: I1121 15:12:31.459413 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:31 crc kubenswrapper[4904]: I1121 15:12:31.460286 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:31 crc kubenswrapper[4904]: I1121 15:12:31.527428 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:31 crc kubenswrapper[4904]: I1121 15:12:31.875327 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:31 crc kubenswrapper[4904]: I1121 15:12:31.938439 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vpf25"]
Nov 21 15:12:33 crc kubenswrapper[4904]: I1121 15:12:33.847905 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vpf25" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerName="registry-server" containerID="cri-o://7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356" gracePeriod=2
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.567639 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.686606 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-catalog-content\") pod \"928394e1-ae5d-4c11-bbe3-c514b3706720\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") "
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.687176 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-utilities\") pod \"928394e1-ae5d-4c11-bbe3-c514b3706720\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") "
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.687497 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr7sg\" (UniqueName: \"kubernetes.io/projected/928394e1-ae5d-4c11-bbe3-c514b3706720-kube-api-access-rr7sg\") pod \"928394e1-ae5d-4c11-bbe3-c514b3706720\" (UID: \"928394e1-ae5d-4c11-bbe3-c514b3706720\") "
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.687849 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-utilities" (OuterVolumeSpecName: "utilities") pod "928394e1-ae5d-4c11-bbe3-c514b3706720" (UID: "928394e1-ae5d-4c11-bbe3-c514b3706720"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.688955 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.694022 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928394e1-ae5d-4c11-bbe3-c514b3706720-kube-api-access-rr7sg" (OuterVolumeSpecName: "kube-api-access-rr7sg") pod "928394e1-ae5d-4c11-bbe3-c514b3706720" (UID: "928394e1-ae5d-4c11-bbe3-c514b3706720"). InnerVolumeSpecName "kube-api-access-rr7sg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.773268 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "928394e1-ae5d-4c11-bbe3-c514b3706720" (UID: "928394e1-ae5d-4c11-bbe3-c514b3706720"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.791459 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/928394e1-ae5d-4c11-bbe3-c514b3706720-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.791507 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr7sg\" (UniqueName: \"kubernetes.io/projected/928394e1-ae5d-4c11-bbe3-c514b3706720-kube-api-access-rr7sg\") on node \"crc\" DevicePath \"\""
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.889749 4904 generic.go:334] "Generic (PLEG): container finished" podID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerID="7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356" exitCode=0
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.889800 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpf25" event={"ID":"928394e1-ae5d-4c11-bbe3-c514b3706720","Type":"ContainerDied","Data":"7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356"}
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.889829 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpf25" event={"ID":"928394e1-ae5d-4c11-bbe3-c514b3706720","Type":"ContainerDied","Data":"c2666937e7a204785f4d8117620ed44c6dee6785f7838197b3d69282bc48ac2f"}
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.889845 4904 scope.go:117] "RemoveContainer" containerID="7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356"
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.889843 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vpf25"
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.921429 4904 scope.go:117] "RemoveContainer" containerID="abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6"
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.937765 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vpf25"]
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.946924 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vpf25"]
Nov 21 15:12:34 crc kubenswrapper[4904]: I1121 15:12:34.953765 4904 scope.go:117] "RemoveContainer" containerID="f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690"
Nov 21 15:12:35 crc kubenswrapper[4904]: I1121 15:12:35.013602 4904 scope.go:117] "RemoveContainer" containerID="7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356"
Nov 21 15:12:35 crc kubenswrapper[4904]: E1121 15:12:35.014068 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356\": container with ID starting with 7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356 not found: ID does not exist" containerID="7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356"
Nov 21 15:12:35 crc kubenswrapper[4904]: I1121 15:12:35.014130 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356"} err="failed to get container status \"7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356\": rpc error: code = NotFound desc = could not find container \"7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356\": container with ID starting with 7fea2cb0e4177c0124c8436972e28ff46e73d6699bb5b20999514eaf35e3d356 not found: ID does not exist"
Nov 21 15:12:35 crc kubenswrapper[4904]: I1121 15:12:35.014184 4904 scope.go:117] "RemoveContainer" containerID="abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6"
Nov 21 15:12:35 crc kubenswrapper[4904]: E1121 15:12:35.014829 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6\": container with ID starting with abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6 not found: ID does not exist" containerID="abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6"
Nov 21 15:12:35 crc kubenswrapper[4904]: I1121 15:12:35.014870 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6"} err="failed to get container status \"abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6\": rpc error: code = NotFound desc = could not find container \"abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6\": container with ID starting with abfd0e6f24fcce46b44bc40d9cbcaf6519a1e9d8fe6f3792bb6fb5ac142184c6 not found: ID does not exist"
Nov 21 15:12:35 crc kubenswrapper[4904]: I1121 15:12:35.014895 4904 scope.go:117] "RemoveContainer" containerID="f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690"
Nov 21 15:12:35 crc kubenswrapper[4904]: E1121 15:12:35.015205 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690\": container with ID starting with f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690 not found: ID does not exist" containerID="f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690"
Nov 21 15:12:35 crc kubenswrapper[4904]: I1121 15:12:35.015238 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690"} err="failed to get container status \"f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690\": rpc error: code = NotFound desc = could not find container \"f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690\": container with ID starting with f608b3104a20daa4982bf61ea4b52ec51e80a6eed6fd9fc04e348877569bf690 not found: ID does not exist"
Nov 21 15:12:36 crc kubenswrapper[4904]: I1121 15:12:36.526209 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" path="/var/lib/kubelet/pods/928394e1-ae5d-4c11-bbe3-c514b3706720/volumes"
Nov 21 15:13:58 crc kubenswrapper[4904]: I1121 15:13:58.113347 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:13:58 crc kubenswrapper[4904]: I1121 15:13:58.114295 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:14:28 crc kubenswrapper[4904]: I1121 15:14:28.113566 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:14:28 crc kubenswrapper[4904]: I1121 15:14:28.114242 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.114299 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.115070 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.115139 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.116410 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"710cfcaf356741555016c5de006a3d484493bacb33070045a853663ef14fb9ed"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.116490 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://710cfcaf356741555016c5de006a3d484493bacb33070045a853663ef14fb9ed" gracePeriod=600
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.756693 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fhbb6"]
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.756979 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="710cfcaf356741555016c5de006a3d484493bacb33070045a853663ef14fb9ed" exitCode=0
Nov 21 15:14:58 crc kubenswrapper[4904]: E1121 15:14:58.758190 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerName="extract-utilities"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.758212 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerName="extract-utilities"
Nov 21 15:14:58 crc kubenswrapper[4904]: E1121 15:14:58.758280 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerName="registry-server"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.758289 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerName="registry-server"
Nov 21 15:14:58 crc kubenswrapper[4904]: E1121 15:14:58.758299 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerName="extract-content"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.758305 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerName="extract-content"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.758605 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="928394e1-ae5d-4c11-bbe3-c514b3706720" containerName="registry-server"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.761853 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"710cfcaf356741555016c5de006a3d484493bacb33070045a853663ef14fb9ed"}
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.761903 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb"}
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.761925 4904 scope.go:117] "RemoveContainer" containerID="e4e910c5be4383e5927b1a974cec938c9d90b478a592cc78c6eca1b790735e07"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.762175 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhbb6"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.777247 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhbb6"]
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.864259 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-utilities\") pod \"redhat-marketplace-fhbb6\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.865149 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-catalog-content\") pod \"redhat-marketplace-fhbb6\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.865562 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsbxw\" (UniqueName: \"kubernetes.io/projected/7acd54e9-b4de-4b50-b562-247b0d1906fd-kube-api-access-vsbxw\") pod \"redhat-marketplace-fhbb6\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.968057 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-catalog-content\") pod \"redhat-marketplace-fhbb6\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.968535 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-catalog-content\") pod \"redhat-marketplace-fhbb6\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.968858 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsbxw\" (UniqueName: \"kubernetes.io/projected/7acd54e9-b4de-4b50-b562-247b0d1906fd-kube-api-access-vsbxw\") pod \"redhat-marketplace-fhbb6\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.968912 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-utilities\") pod \"redhat-marketplace-fhbb6\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6"
Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.969242 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-utilities\") pod \"redhat-marketplace-fhbb6\" (UID:
\"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:14:58 crc kubenswrapper[4904]: I1121 15:14:58.996040 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsbxw\" (UniqueName: \"kubernetes.io/projected/7acd54e9-b4de-4b50-b562-247b0d1906fd-kube-api-access-vsbxw\") pod \"redhat-marketplace-fhbb6\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.091412 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.152407 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xtvcz"] Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.156097 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.206149 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtvcz"] Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.293168 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-utilities\") pod \"redhat-operators-xtvcz\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.296678 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55vw\" (UniqueName: \"kubernetes.io/projected/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-kube-api-access-s55vw\") pod \"redhat-operators-xtvcz\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.296787 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-catalog-content\") pod \"redhat-operators-xtvcz\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.404137 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-utilities\") pod \"redhat-operators-xtvcz\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.404241 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55vw\" (UniqueName: \"kubernetes.io/projected/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-kube-api-access-s55vw\") pod \"redhat-operators-xtvcz\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.404283 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-catalog-content\") pod \"redhat-operators-xtvcz\" (UID: 
\"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.406941 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-catalog-content\") pod \"redhat-operators-xtvcz\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.407354 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-utilities\") pod \"redhat-operators-xtvcz\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.450602 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55vw\" (UniqueName: \"kubernetes.io/projected/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-kube-api-access-s55vw\") pod \"redhat-operators-xtvcz\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.579091 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.733095 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhbb6"] Nov 21 15:14:59 crc kubenswrapper[4904]: I1121 15:14:59.797848 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhbb6" event={"ID":"7acd54e9-b4de-4b50-b562-247b0d1906fd","Type":"ContainerStarted","Data":"7c6751acef5f5b0c101802eefad5d7e542e2617e77274306d5cc866750947b37"} Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.197570 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtvcz"] Nov 21 15:15:00 crc kubenswrapper[4904]: W1121 15:15:00.198214 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde2ad9cf_ebeb_4fac_ae58_b90dba1534f8.slice/crio-d225ec16d188ee0c8c60fb89603f0b46fd8e60f3a85c52707b0c0c18d6fbf82f WatchSource:0}: Error finding container d225ec16d188ee0c8c60fb89603f0b46fd8e60f3a85c52707b0c0c18d6fbf82f: Status 404 returned error can't find the container with id d225ec16d188ee0c8c60fb89603f0b46fd8e60f3a85c52707b0c0c18d6fbf82f Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.214177 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz"] Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.216859 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.221044 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.221751 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.227162 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz"] Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.335750 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgt94\" (UniqueName: \"kubernetes.io/projected/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-kube-api-access-bgt94\") pod \"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.336449 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-config-volume\") pod \"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.336805 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-secret-volume\") pod \"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.439988 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-config-volume\") pod \"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.440160 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-secret-volume\") pod \"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.440354 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgt94\" (UniqueName: \"kubernetes.io/projected/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-kube-api-access-bgt94\") pod \"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.441275 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-config-volume\") pod 
\"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.449884 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-secret-volume\") pod \"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.462936 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgt94\" (UniqueName: \"kubernetes.io/projected/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-kube-api-access-bgt94\") pod \"collect-profiles-29395635-whrcz\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.547558 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.861118 4904 generic.go:334] "Generic (PLEG): container finished" podID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerID="9a2b10caf92d5a1d83c9518527463dd5d4215b3e9b0a1330389393f7d1f11cbf" exitCode=0 Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.861604 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtvcz" event={"ID":"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8","Type":"ContainerDied","Data":"9a2b10caf92d5a1d83c9518527463dd5d4215b3e9b0a1330389393f7d1f11cbf"} Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.861647 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtvcz" event={"ID":"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8","Type":"ContainerStarted","Data":"d225ec16d188ee0c8c60fb89603f0b46fd8e60f3a85c52707b0c0c18d6fbf82f"} Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.874443 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.878872 4904 generic.go:334] "Generic (PLEG): container finished" podID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerID="6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe" exitCode=0 Nov 21 15:15:00 crc kubenswrapper[4904]: I1121 15:15:00.878998 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhbb6" event={"ID":"7acd54e9-b4de-4b50-b562-247b0d1906fd","Type":"ContainerDied","Data":"6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe"} Nov 21 15:15:01 crc kubenswrapper[4904]: I1121 15:15:01.230604 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz"] Nov 21 15:15:01 crc kubenswrapper[4904]: W1121 15:15:01.237705 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d7c9133_5ded_4a51_bc40_7d0f166fe17b.slice/crio-32efc8fa9dcbe07b0ad20aeb20e753bed04d08397e2154403aa53e5e3c54ecd9 WatchSource:0}: Error finding container 32efc8fa9dcbe07b0ad20aeb20e753bed04d08397e2154403aa53e5e3c54ecd9: Status 404 returned error can't find the container with id 
32efc8fa9dcbe07b0ad20aeb20e753bed04d08397e2154403aa53e5e3c54ecd9 Nov 21 15:15:01 crc kubenswrapper[4904]: I1121 15:15:01.902054 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtvcz" event={"ID":"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8","Type":"ContainerStarted","Data":"8ac8feae9df59e3c33a420cc162ff07ed2afc071667fcaf7ea59ce3d5672e09c"} Nov 21 15:15:01 crc kubenswrapper[4904]: I1121 15:15:01.926014 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhbb6" event={"ID":"7acd54e9-b4de-4b50-b562-247b0d1906fd","Type":"ContainerStarted","Data":"d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6"} Nov 21 15:15:01 crc kubenswrapper[4904]: I1121 15:15:01.938399 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" event={"ID":"9d7c9133-5ded-4a51-bc40-7d0f166fe17b","Type":"ContainerStarted","Data":"9dc73a4c83b63eb01dbefd1789a8e4d7f3cbabd5029beae8a1d57c37f68e10fc"} Nov 21 15:15:01 crc kubenswrapper[4904]: I1121 15:15:01.938462 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" event={"ID":"9d7c9133-5ded-4a51-bc40-7d0f166fe17b","Type":"ContainerStarted","Data":"32efc8fa9dcbe07b0ad20aeb20e753bed04d08397e2154403aa53e5e3c54ecd9"} Nov 21 15:15:02 crc kubenswrapper[4904]: I1121 15:15:02.014930 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" podStartSLOduration=2.014903693 podStartE2EDuration="2.014903693s" podCreationTimestamp="2025-11-21 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:15:01.992852737 +0000 UTC m=+6176.114385299" watchObservedRunningTime="2025-11-21 15:15:02.014903693 +0000 UTC m=+6176.136436245" Nov 21 15:15:02 crc kubenswrapper[4904]: I1121 15:15:02.950573 4904 generic.go:334] "Generic (PLEG): container finished" podID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerID="d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6" exitCode=0 Nov 21 15:15:02 crc kubenswrapper[4904]: I1121 15:15:02.950616 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhbb6" event={"ID":"7acd54e9-b4de-4b50-b562-247b0d1906fd","Type":"ContainerDied","Data":"d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6"} Nov 21 15:15:02 crc kubenswrapper[4904]: I1121 15:15:02.953024 4904 generic.go:334] "Generic (PLEG): container finished" podID="9d7c9133-5ded-4a51-bc40-7d0f166fe17b" containerID="9dc73a4c83b63eb01dbefd1789a8e4d7f3cbabd5029beae8a1d57c37f68e10fc" exitCode=0 Nov 21 15:15:02 crc kubenswrapper[4904]: I1121 15:15:02.953795 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" event={"ID":"9d7c9133-5ded-4a51-bc40-7d0f166fe17b","Type":"ContainerDied","Data":"9dc73a4c83b63eb01dbefd1789a8e4d7f3cbabd5029beae8a1d57c37f68e10fc"} Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.417585 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.459910 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-secret-volume\") pod \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.460146 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgt94\" (UniqueName: \"kubernetes.io/projected/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-kube-api-access-bgt94\") pod \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.460238 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-config-volume\") pod \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\" (UID: \"9d7c9133-5ded-4a51-bc40-7d0f166fe17b\") " Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.461734 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-config-volume" (OuterVolumeSpecName: "config-volume") pod "9d7c9133-5ded-4a51-bc40-7d0f166fe17b" (UID: "9d7c9133-5ded-4a51-bc40-7d0f166fe17b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.468566 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9d7c9133-5ded-4a51-bc40-7d0f166fe17b" (UID: "9d7c9133-5ded-4a51-bc40-7d0f166fe17b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.468741 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-kube-api-access-bgt94" (OuterVolumeSpecName: "kube-api-access-bgt94") pod "9d7c9133-5ded-4a51-bc40-7d0f166fe17b" (UID: "9d7c9133-5ded-4a51-bc40-7d0f166fe17b"). InnerVolumeSpecName "kube-api-access-bgt94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.563889 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgt94\" (UniqueName: \"kubernetes.io/projected/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-kube-api-access-bgt94\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.564311 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.564393 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9d7c9133-5ded-4a51-bc40-7d0f166fe17b-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.988268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhbb6" event={"ID":"7acd54e9-b4de-4b50-b562-247b0d1906fd","Type":"ContainerStarted","Data":"3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5"} Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.991390 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" event={"ID":"9d7c9133-5ded-4a51-bc40-7d0f166fe17b","Type":"ContainerDied","Data":"32efc8fa9dcbe07b0ad20aeb20e753bed04d08397e2154403aa53e5e3c54ecd9"} Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.991428 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32efc8fa9dcbe07b0ad20aeb20e753bed04d08397e2154403aa53e5e3c54ecd9" Nov 21 15:15:04 crc kubenswrapper[4904]: I1121 15:15:04.991496 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz" Nov 21 15:15:05 crc kubenswrapper[4904]: I1121 15:15:05.017521 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fhbb6" podStartSLOduration=4.258099986 podStartE2EDuration="7.017501401s" podCreationTimestamp="2025-11-21 15:14:58 +0000 UTC" firstStartedPulling="2025-11-21 15:15:00.882609715 +0000 UTC m=+6175.004142267" lastFinishedPulling="2025-11-21 15:15:03.64201114 +0000 UTC m=+6177.763543682" observedRunningTime="2025-11-21 15:15:05.012921449 +0000 UTC m=+6179.134454001" watchObservedRunningTime="2025-11-21 15:15:05.017501401 +0000 UTC m=+6179.139033953" Nov 21 15:15:05 crc kubenswrapper[4904]: I1121 15:15:05.501643 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl"] Nov 21 15:15:05 crc kubenswrapper[4904]: I1121 15:15:05.518574 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395590-htrvl"] Nov 21 15:15:06 crc kubenswrapper[4904]: I1121 15:15:06.830450 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c83b3a3-48f6-49d9-9ed8-ad61323dcf96" path="/var/lib/kubelet/pods/1c83b3a3-48f6-49d9-9ed8-ad61323dcf96/volumes" Nov 21 15:15:09 crc kubenswrapper[4904]: I1121 15:15:09.092590 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:15:09 crc kubenswrapper[4904]: I1121 15:15:09.093588 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:15:10 crc kubenswrapper[4904]: I1121 15:15:10.181204 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fhbb6" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:10 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:10 crc kubenswrapper[4904]: > Nov 21 15:15:11 crc kubenswrapper[4904]: E1121 15:15:11.516260 4904 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde2ad9cf_ebeb_4fac_ae58_b90dba1534f8.slice/crio-conmon-8ac8feae9df59e3c33a420cc162ff07ed2afc071667fcaf7ea59ce3d5672e09c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde2ad9cf_ebeb_4fac_ae58_b90dba1534f8.slice/crio-8ac8feae9df59e3c33a420cc162ff07ed2afc071667fcaf7ea59ce3d5672e09c.scope\": RecentStats: unable to find data in memory cache]" Nov 21 15:15:12 crc kubenswrapper[4904]: I1121 15:15:12.095033 4904 generic.go:334] "Generic (PLEG): container finished" podID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerID="8ac8feae9df59e3c33a420cc162ff07ed2afc071667fcaf7ea59ce3d5672e09c" exitCode=0 Nov 21 15:15:12 crc kubenswrapper[4904]: I1121 15:15:12.095185 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtvcz" event={"ID":"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8","Type":"ContainerDied","Data":"8ac8feae9df59e3c33a420cc162ff07ed2afc071667fcaf7ea59ce3d5672e09c"} Nov 21 15:15:14 crc kubenswrapper[4904]: I1121 15:15:14.127899 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-xtvcz" event={"ID":"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8","Type":"ContainerStarted","Data":"03bb4c5cc690466e3fa6abae07f15bd4666303e623c0beef9c2a49c5943d1348"} Nov 21 15:15:14 crc kubenswrapper[4904]: I1121 15:15:14.149181 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtvcz" podStartSLOduration=2.434808041 podStartE2EDuration="15.149161125s" podCreationTimestamp="2025-11-21 15:14:59 +0000 UTC" firstStartedPulling="2025-11-21 15:15:00.8741566 +0000 UTC m=+6174.995689152" lastFinishedPulling="2025-11-21 15:15:13.588509684 +0000 UTC m=+6187.710042236" observedRunningTime="2025-11-21 15:15:14.147455413 +0000 UTC m=+6188.268987975" watchObservedRunningTime="2025-11-21 15:15:14.149161125 +0000 UTC m=+6188.270693687" Nov 21 15:15:19 crc kubenswrapper[4904]: I1121 15:15:19.163977 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:15:19 crc kubenswrapper[4904]: I1121 15:15:19.241419 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:15:19 crc kubenswrapper[4904]: I1121 15:15:19.579449 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:15:19 crc kubenswrapper[4904]: I1121 15:15:19.579986 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:15:20 crc kubenswrapper[4904]: I1121 15:15:20.383877 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhbb6"] Nov 21 15:15:20 crc kubenswrapper[4904]: I1121 15:15:20.384146 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fhbb6" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="registry-server" containerID="cri-o://3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5" gracePeriod=2 Nov 21 15:15:20 crc kubenswrapper[4904]: I1121 15:15:20.638015 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtvcz" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:20 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:20 crc kubenswrapper[4904]: > Nov 21 15:15:20 crc kubenswrapper[4904]: I1121 15:15:20.986422 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.079754 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-catalog-content\") pod \"7acd54e9-b4de-4b50-b562-247b0d1906fd\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.079921 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-utilities\") pod \"7acd54e9-b4de-4b50-b562-247b0d1906fd\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.080133 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsbxw\" (UniqueName: \"kubernetes.io/projected/7acd54e9-b4de-4b50-b562-247b0d1906fd-kube-api-access-vsbxw\") pod \"7acd54e9-b4de-4b50-b562-247b0d1906fd\" (UID: \"7acd54e9-b4de-4b50-b562-247b0d1906fd\") " Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.080607 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-utilities" (OuterVolumeSpecName: "utilities") pod "7acd54e9-b4de-4b50-b562-247b0d1906fd" (UID: "7acd54e9-b4de-4b50-b562-247b0d1906fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.081054 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.093282 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acd54e9-b4de-4b50-b562-247b0d1906fd-kube-api-access-vsbxw" (OuterVolumeSpecName: "kube-api-access-vsbxw") pod "7acd54e9-b4de-4b50-b562-247b0d1906fd" (UID: "7acd54e9-b4de-4b50-b562-247b0d1906fd"). InnerVolumeSpecName "kube-api-access-vsbxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.098107 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7acd54e9-b4de-4b50-b562-247b0d1906fd" (UID: "7acd54e9-b4de-4b50-b562-247b0d1906fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.184000 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7acd54e9-b4de-4b50-b562-247b0d1906fd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.184050 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsbxw\" (UniqueName: \"kubernetes.io/projected/7acd54e9-b4de-4b50-b562-247b0d1906fd-kube-api-access-vsbxw\") on node \"crc\" DevicePath \"\"" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.212184 4904 generic.go:334] "Generic (PLEG): container finished" podID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerID="3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5" exitCode=0 Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.212249 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhbb6" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.212260 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhbb6" event={"ID":"7acd54e9-b4de-4b50-b562-247b0d1906fd","Type":"ContainerDied","Data":"3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5"} Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.212328 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhbb6" event={"ID":"7acd54e9-b4de-4b50-b562-247b0d1906fd","Type":"ContainerDied","Data":"7c6751acef5f5b0c101802eefad5d7e542e2617e77274306d5cc866750947b37"} Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.212369 4904 scope.go:117] "RemoveContainer" containerID="3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.248316 4904 scope.go:117] "RemoveContainer" containerID="d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.255366 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhbb6"] Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.274463 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhbb6"] Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.313427 4904 scope.go:117] "RemoveContainer" containerID="6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.352067 4904 scope.go:117] "RemoveContainer" containerID="3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5" Nov 21 15:15:21 crc kubenswrapper[4904]: E1121 15:15:21.353538 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5\": container with ID starting with 3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5 not found: ID does not exist" containerID="3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.353605 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5"} err="failed to get container status 
\"3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5\": rpc error: code = NotFound desc = could not find container \"3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5\": container with ID starting with 3010bd5c58040edbc137905f4d5d399280f2b8106991e62137296099510b7de5 not found: ID does not exist" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.353635 4904 scope.go:117] "RemoveContainer" containerID="d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6" Nov 21 15:15:21 crc kubenswrapper[4904]: E1121 15:15:21.354835 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6\": container with ID starting with d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6 not found: ID does not exist" containerID="d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.354876 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6"} err="failed to get container status \"d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6\": rpc error: code = NotFound desc = could not find container \"d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6\": container with ID starting with d5def84a86f619aa1b77cec7b32c160031c79f805d0cc8a0e1a2bac789784af6 not found: ID does not exist" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.354906 4904 scope.go:117] "RemoveContainer" containerID="6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe" Nov 21 15:15:21 crc kubenswrapper[4904]: E1121 15:15:21.355519 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe\": container with ID starting with 6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe not found: ID does not exist" containerID="6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe" Nov 21 15:15:21 crc kubenswrapper[4904]: I1121 15:15:21.355644 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe"} err="failed to get container status \"6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe\": rpc error: code = NotFound desc = could not find container \"6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe\": container with ID starting with 6882b25eaa0c56cd819f1f9b8eb0a35d97bb9da475cb823779ca5a5ea34ccffe not found: ID does not exist" Nov 21 15:15:22 crc kubenswrapper[4904]: I1121 15:15:22.526886 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" path="/var/lib/kubelet/pods/7acd54e9-b4de-4b50-b562-247b0d1906fd/volumes" Nov 21 15:15:30 crc kubenswrapper[4904]: I1121 15:15:30.637313 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtvcz" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:30 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:30 crc kubenswrapper[4904]: > Nov 21 15:15:34 crc kubenswrapper[4904]: I1121 15:15:34.279797 4904 scope.go:117] 
"RemoveContainer" containerID="db5744a00aa33e6fd6a0465fd38de426218b26aa5172e6342b45aa4b560c58e7" Nov 21 15:15:40 crc kubenswrapper[4904]: I1121 15:15:40.633139 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtvcz" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:40 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:40 crc kubenswrapper[4904]: > Nov 21 15:15:50 crc kubenswrapper[4904]: I1121 15:15:50.637559 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtvcz" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="registry-server" probeResult="failure" output=< Nov 21 15:15:50 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:15:50 crc kubenswrapper[4904]: > Nov 21 15:15:59 crc kubenswrapper[4904]: I1121 15:15:59.634867 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:15:59 crc kubenswrapper[4904]: I1121 15:15:59.704720 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:16:00 crc kubenswrapper[4904]: I1121 15:16:00.366452 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtvcz"] Nov 21 15:16:00 crc kubenswrapper[4904]: I1121 15:16:00.759919 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xtvcz" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="registry-server" containerID="cri-o://03bb4c5cc690466e3fa6abae07f15bd4666303e623c0beef9c2a49c5943d1348" gracePeriod=2 Nov 21 15:16:01 crc kubenswrapper[4904]: I1121 15:16:01.775160 4904 generic.go:334] "Generic (PLEG): container finished" podID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerID="03bb4c5cc690466e3fa6abae07f15bd4666303e623c0beef9c2a49c5943d1348" exitCode=0 Nov 21 15:16:01 crc kubenswrapper[4904]: I1121 15:16:01.775826 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtvcz" event={"ID":"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8","Type":"ContainerDied","Data":"03bb4c5cc690466e3fa6abae07f15bd4666303e623c0beef9c2a49c5943d1348"} Nov 21 15:16:01 crc kubenswrapper[4904]: I1121 15:16:01.970968 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.087195 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-utilities\") pod \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.087248 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-catalog-content\") pod \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.087589 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s55vw\" (UniqueName: \"kubernetes.io/projected/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-kube-api-access-s55vw\") pod \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\" (UID: \"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8\") " Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.088301 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-utilities" (OuterVolumeSpecName: "utilities") pod "de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" (UID: "de2ad9cf-ebeb-4fac-ae58-b90dba1534f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.097376 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-kube-api-access-s55vw" (OuterVolumeSpecName: "kube-api-access-s55vw") pod "de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" (UID: "de2ad9cf-ebeb-4fac-ae58-b90dba1534f8"). InnerVolumeSpecName "kube-api-access-s55vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.185008 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" (UID: "de2ad9cf-ebeb-4fac-ae58-b90dba1534f8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.189989 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s55vw\" (UniqueName: \"kubernetes.io/projected/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-kube-api-access-s55vw\") on node \"crc\" DevicePath \"\"" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.190016 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.190027 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.802672 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtvcz" event={"ID":"de2ad9cf-ebeb-4fac-ae58-b90dba1534f8","Type":"ContainerDied","Data":"d225ec16d188ee0c8c60fb89603f0b46fd8e60f3a85c52707b0c0c18d6fbf82f"} Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.803177 4904 scope.go:117] "RemoveContainer" containerID="03bb4c5cc690466e3fa6abae07f15bd4666303e623c0beef9c2a49c5943d1348" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.802991 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtvcz" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.864517 4904 scope.go:117] "RemoveContainer" containerID="8ac8feae9df59e3c33a420cc162ff07ed2afc071667fcaf7ea59ce3d5672e09c" Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.871828 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtvcz"] Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.932518 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xtvcz"] Nov 21 15:16:02 crc kubenswrapper[4904]: I1121 15:16:02.953223 4904 scope.go:117] "RemoveContainer" containerID="9a2b10caf92d5a1d83c9518527463dd5d4215b3e9b0a1330389393f7d1f11cbf" Nov 21 15:16:04 crc kubenswrapper[4904]: I1121 15:16:04.527413 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" path="/var/lib/kubelet/pods/de2ad9cf-ebeb-4fac-ae58-b90dba1534f8/volumes" Nov 21 15:16:58 crc kubenswrapper[4904]: I1121 15:16:58.113167 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:16:58 crc kubenswrapper[4904]: I1121 15:16:58.113857 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:17:11 crc kubenswrapper[4904]: E1121 15:17:11.743695 4904 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.138:35066->38.102.83.138:45309: write tcp 38.102.83.138:35066->38.102.83.138:45309: write: broken pipe Nov 21 15:17:28 crc kubenswrapper[4904]: 
I1121 15:17:28.113853 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:17:28 crc kubenswrapper[4904]: I1121 15:17:28.114475 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:17:58 crc kubenswrapper[4904]: I1121 15:17:58.113544 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:17:58 crc kubenswrapper[4904]: I1121 15:17:58.114225 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:17:58 crc kubenswrapper[4904]: I1121 15:17:58.114278 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 15:17:58 crc kubenswrapper[4904]: I1121 15:17:58.115007 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 15:17:58 crc kubenswrapper[4904]: I1121 15:17:58.115054 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" gracePeriod=600 Nov 21 15:17:58 crc kubenswrapper[4904]: E1121 15:17:58.320546 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:17:59 crc kubenswrapper[4904]: I1121 15:17:59.165987 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" exitCode=0 Nov 21 15:17:59 crc kubenswrapper[4904]: I1121 15:17:59.166121 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" 
event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb"} Nov 21 15:17:59 crc kubenswrapper[4904]: I1121 15:17:59.166434 4904 scope.go:117] "RemoveContainer" containerID="710cfcaf356741555016c5de006a3d484493bacb33070045a853663ef14fb9ed" Nov 21 15:17:59 crc kubenswrapper[4904]: I1121 15:17:59.167768 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:17:59 crc kubenswrapper[4904]: E1121 15:17:59.168215 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:18:14 crc kubenswrapper[4904]: I1121 15:18:14.512967 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:18:14 crc kubenswrapper[4904]: E1121 15:18:14.513950 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:18:25 crc kubenswrapper[4904]: I1121 15:18:25.513522 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:18:25 crc kubenswrapper[4904]: E1121 15:18:25.514538 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:18:36 crc kubenswrapper[4904]: I1121 15:18:36.521968 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:18:36 crc kubenswrapper[4904]: E1121 15:18:36.522864 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:18:47 crc kubenswrapper[4904]: I1121 15:18:47.513635 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:18:47 crc kubenswrapper[4904]: E1121 15:18:47.516475 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:18:58 crc kubenswrapper[4904]: I1121 15:18:58.513389 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:18:58 crc kubenswrapper[4904]: E1121 15:18:58.514456 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:19:13 crc kubenswrapper[4904]: I1121 15:19:13.512814 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:19:13 crc kubenswrapper[4904]: E1121 15:19:13.513629 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.859346 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nwfhb"] Nov 21 15:19:16 crc kubenswrapper[4904]: E1121 15:19:16.860827 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="extract-content" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.860848 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="extract-content" Nov 21 15:19:16 crc kubenswrapper[4904]: E1121 15:19:16.860884 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="registry-server" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.860893 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="registry-server" Nov 21 15:19:16 crc kubenswrapper[4904]: E1121 15:19:16.860917 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="extract-utilities" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.860928 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="extract-utilities" Nov 21 15:19:16 crc kubenswrapper[4904]: E1121 15:19:16.860951 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="extract-content" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.860960 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="extract-content" Nov 21 15:19:16 crc kubenswrapper[4904]: E1121 15:19:16.860974 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="extract-utilities" Nov 21 15:19:16 crc 
kubenswrapper[4904]: I1121 15:19:16.860984 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="extract-utilities" Nov 21 15:19:16 crc kubenswrapper[4904]: E1121 15:19:16.861001 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d7c9133-5ded-4a51-bc40-7d0f166fe17b" containerName="collect-profiles" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.861009 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d7c9133-5ded-4a51-bc40-7d0f166fe17b" containerName="collect-profiles" Nov 21 15:19:16 crc kubenswrapper[4904]: E1121 15:19:16.861043 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="registry-server" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.861052 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="registry-server" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.861392 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acd54e9-b4de-4b50-b562-247b0d1906fd" containerName="registry-server" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.861408 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d7c9133-5ded-4a51-bc40-7d0f166fe17b" containerName="collect-profiles" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.861440 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="de2ad9cf-ebeb-4fac-ae58-b90dba1534f8" containerName="registry-server" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.864160 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:16 crc kubenswrapper[4904]: I1121 15:19:16.874999 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwfhb"] Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.065773 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-utilities\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.065838 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-catalog-content\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.065882 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm22q\" (UniqueName: \"kubernetes.io/projected/46a939fa-c1e9-4a70-966e-933393f0e174-kube-api-access-gm22q\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.170458 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-utilities\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " 
pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.170541 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-catalog-content\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.170583 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm22q\" (UniqueName: \"kubernetes.io/projected/46a939fa-c1e9-4a70-966e-933393f0e174-kube-api-access-gm22q\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.171241 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-catalog-content\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.171417 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-utilities\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.195112 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm22q\" (UniqueName: \"kubernetes.io/projected/46a939fa-c1e9-4a70-966e-933393f0e174-kube-api-access-gm22q\") pod \"community-operators-nwfhb\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:17 crc kubenswrapper[4904]: I1121 15:19:17.495122 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:18 crc kubenswrapper[4904]: I1121 15:19:18.082906 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nwfhb"] Nov 21 15:19:19 crc kubenswrapper[4904]: I1121 15:19:19.097311 4904 generic.go:334] "Generic (PLEG): container finished" podID="46a939fa-c1e9-4a70-966e-933393f0e174" containerID="9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8" exitCode=0 Nov 21 15:19:19 crc kubenswrapper[4904]: I1121 15:19:19.097429 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwfhb" event={"ID":"46a939fa-c1e9-4a70-966e-933393f0e174","Type":"ContainerDied","Data":"9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8"} Nov 21 15:19:19 crc kubenswrapper[4904]: I1121 15:19:19.097781 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwfhb" event={"ID":"46a939fa-c1e9-4a70-966e-933393f0e174","Type":"ContainerStarted","Data":"3579fe12eaf08c0086a99e6a3ed7c4a4233d677ab4220ae0c750c58a79f14ccd"} Nov 21 15:19:22 crc kubenswrapper[4904]: I1121 15:19:22.133453 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwfhb" event={"ID":"46a939fa-c1e9-4a70-966e-933393f0e174","Type":"ContainerStarted","Data":"f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219"} Nov 21 15:19:25 crc kubenswrapper[4904]: I1121 15:19:25.196350 4904 generic.go:334] "Generic (PLEG): container finished" podID="46a939fa-c1e9-4a70-966e-933393f0e174" containerID="f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219" exitCode=0 Nov 21 15:19:25 crc kubenswrapper[4904]: I1121 15:19:25.196436 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwfhb" event={"ID":"46a939fa-c1e9-4a70-966e-933393f0e174","Type":"ContainerDied","Data":"f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219"} Nov 21 15:19:26 crc kubenswrapper[4904]: I1121 15:19:26.217942 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwfhb" event={"ID":"46a939fa-c1e9-4a70-966e-933393f0e174","Type":"ContainerStarted","Data":"7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3"} Nov 21 15:19:26 crc kubenswrapper[4904]: I1121 15:19:26.247549 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nwfhb" podStartSLOduration=3.614565527 podStartE2EDuration="10.247529973s" podCreationTimestamp="2025-11-21 15:19:16 +0000 UTC" firstStartedPulling="2025-11-21 15:19:19.10051776 +0000 UTC m=+6433.222050302" lastFinishedPulling="2025-11-21 15:19:25.733482196 +0000 UTC m=+6439.855014748" observedRunningTime="2025-11-21 15:19:26.238317849 +0000 UTC m=+6440.359850401" watchObservedRunningTime="2025-11-21 15:19:26.247529973 +0000 UTC m=+6440.369062535" Nov 21 15:19:26 crc kubenswrapper[4904]: I1121 15:19:26.527757 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:19:26 crc kubenswrapper[4904]: E1121 15:19:26.528028 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:19:27 crc kubenswrapper[4904]: I1121 15:19:27.495501 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:27 crc kubenswrapper[4904]: I1121 15:19:27.496402 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:28 crc kubenswrapper[4904]: I1121 15:19:28.557949 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nwfhb" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="registry-server" probeResult="failure" output=< Nov 21 15:19:28 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:19:28 crc kubenswrapper[4904]: > Nov 21 15:19:37 crc kubenswrapper[4904]: I1121 15:19:37.551401 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:37 crc kubenswrapper[4904]: I1121 15:19:37.621490 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:37 crc kubenswrapper[4904]: I1121 15:19:37.803053 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwfhb"] Nov 21 15:19:39 crc kubenswrapper[4904]: I1121 15:19:39.376895 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nwfhb" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="registry-server" containerID="cri-o://7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3" gracePeriod=2 Nov 21 15:19:39 crc kubenswrapper[4904]: I1121 15:19:39.946188 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:39 crc kubenswrapper[4904]: I1121 15:19:39.958871 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-utilities\") pod \"46a939fa-c1e9-4a70-966e-933393f0e174\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " Nov 21 15:19:39 crc kubenswrapper[4904]: I1121 15:19:39.958962 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-catalog-content\") pod \"46a939fa-c1e9-4a70-966e-933393f0e174\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " Nov 21 15:19:39 crc kubenswrapper[4904]: I1121 15:19:39.959151 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm22q\" (UniqueName: \"kubernetes.io/projected/46a939fa-c1e9-4a70-966e-933393f0e174-kube-api-access-gm22q\") pod \"46a939fa-c1e9-4a70-966e-933393f0e174\" (UID: \"46a939fa-c1e9-4a70-966e-933393f0e174\") " Nov 21 15:19:39 crc kubenswrapper[4904]: I1121 15:19:39.962311 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-utilities" (OuterVolumeSpecName: "utilities") pod "46a939fa-c1e9-4a70-966e-933393f0e174" (UID: "46a939fa-c1e9-4a70-966e-933393f0e174"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:19:39 crc kubenswrapper[4904]: I1121 15:19:39.979705 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a939fa-c1e9-4a70-966e-933393f0e174-kube-api-access-gm22q" (OuterVolumeSpecName: "kube-api-access-gm22q") pod "46a939fa-c1e9-4a70-966e-933393f0e174" (UID: "46a939fa-c1e9-4a70-966e-933393f0e174"). InnerVolumeSpecName "kube-api-access-gm22q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.063375 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.063421 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm22q\" (UniqueName: \"kubernetes.io/projected/46a939fa-c1e9-4a70-966e-933393f0e174-kube-api-access-gm22q\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.086282 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46a939fa-c1e9-4a70-966e-933393f0e174" (UID: "46a939fa-c1e9-4a70-966e-933393f0e174"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.166337 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46a939fa-c1e9-4a70-966e-933393f0e174-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.390118 4904 generic.go:334] "Generic (PLEG): container finished" podID="46a939fa-c1e9-4a70-966e-933393f0e174" containerID="7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3" exitCode=0 Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.390165 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwfhb" event={"ID":"46a939fa-c1e9-4a70-966e-933393f0e174","Type":"ContainerDied","Data":"7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3"} Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.390176 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nwfhb" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.390196 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nwfhb" event={"ID":"46a939fa-c1e9-4a70-966e-933393f0e174","Type":"ContainerDied","Data":"3579fe12eaf08c0086a99e6a3ed7c4a4233d677ab4220ae0c750c58a79f14ccd"} Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.390216 4904 scope.go:117] "RemoveContainer" containerID="7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.429794 4904 scope.go:117] "RemoveContainer" containerID="f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.437710 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nwfhb"] Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.449070 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nwfhb"] Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.464754 4904 scope.go:117] "RemoveContainer" containerID="9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.529702 4904 scope.go:117] "RemoveContainer" containerID="7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3" Nov 21 15:19:40 crc kubenswrapper[4904]: E1121 15:19:40.530281 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3\": container with ID starting with 7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3 not found: ID does not exist" containerID="7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.530324 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3"} err="failed to get container status \"7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3\": rpc error: code = NotFound desc = could not find container \"7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3\": container with ID starting with 7f76798c2716188fd23e82c8fa8c9307134a79dfd33f85e7ea86aa5f330a12e3 not found: ID does not exist" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.530355 4904 scope.go:117] "RemoveContainer" containerID="f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219" Nov 21 15:19:40 crc kubenswrapper[4904]: E1121 15:19:40.531060 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219\": container with ID starting with f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219 not found: ID does not exist" containerID="f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.531086 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219"} err="failed to get container status \"f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219\": rpc error: code = NotFound desc = could not find 
container \"f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219\": container with ID starting with f04dd146ff3628b1ef635d35bd1405972107ddb6e2e27216ae63b94f967a9219 not found: ID does not exist" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.531103 4904 scope.go:117] "RemoveContainer" containerID="9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.531429 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" path="/var/lib/kubelet/pods/46a939fa-c1e9-4a70-966e-933393f0e174/volumes" Nov 21 15:19:40 crc kubenswrapper[4904]: E1121 15:19:40.531459 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8\": container with ID starting with 9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8 not found: ID does not exist" containerID="9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8" Nov 21 15:19:40 crc kubenswrapper[4904]: I1121 15:19:40.531485 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8"} err="failed to get container status \"9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8\": rpc error: code = NotFound desc = could not find container \"9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8\": container with ID starting with 9fdb14b922cb3871eaf0e39f413bc9e1f80c8df66e56632bba18b20901ef8ae8 not found: ID does not exist" Nov 21 15:19:41 crc kubenswrapper[4904]: I1121 15:19:41.513819 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:19:41 crc kubenswrapper[4904]: E1121 15:19:41.514519 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:19:55 crc kubenswrapper[4904]: I1121 15:19:55.514260 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:19:55 crc kubenswrapper[4904]: E1121 15:19:55.515713 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:20:08 crc kubenswrapper[4904]: I1121 15:20:08.514279 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:20:08 crc kubenswrapper[4904]: E1121 15:20:08.515550 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:20:20 crc kubenswrapper[4904]: I1121 15:20:20.514071 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:20:20 crc kubenswrapper[4904]: E1121 15:20:20.515130 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:20:35 crc kubenswrapper[4904]: I1121 15:20:35.513409 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:20:35 crc kubenswrapper[4904]: E1121 15:20:35.514281 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:20:48 crc kubenswrapper[4904]: I1121 15:20:48.513993 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:20:48 crc kubenswrapper[4904]: E1121 15:20:48.514865 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:20:59 crc kubenswrapper[4904]: I1121 15:20:59.513636 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:20:59 crc kubenswrapper[4904]: E1121 15:20:59.514543 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:21:10 crc kubenswrapper[4904]: I1121 15:21:10.513710 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:21:10 crc kubenswrapper[4904]: E1121 15:21:10.514536 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" 
podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:21:25 crc kubenswrapper[4904]: I1121 15:21:25.513708 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:21:25 crc kubenswrapper[4904]: E1121 15:21:25.516184 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:21:36 crc kubenswrapper[4904]: I1121 15:21:36.525233 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:21:36 crc kubenswrapper[4904]: E1121 15:21:36.526411 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:21:50 crc kubenswrapper[4904]: I1121 15:21:50.513411 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:21:50 crc kubenswrapper[4904]: E1121 15:21:50.514229 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:22:01 crc kubenswrapper[4904]: I1121 15:22:01.514277 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:22:01 crc kubenswrapper[4904]: E1121 15:22:01.516345 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:22:13 crc kubenswrapper[4904]: I1121 15:22:13.513597 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:22:13 crc kubenswrapper[4904]: E1121 15:22:13.514425 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:22:28 crc kubenswrapper[4904]: I1121 15:22:28.514697 4904 scope.go:117] "RemoveContainer" 
containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:22:28 crc kubenswrapper[4904]: E1121 15:22:28.516455 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:22:40 crc kubenswrapper[4904]: I1121 15:22:40.513007 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:22:40 crc kubenswrapper[4904]: E1121 15:22:40.513809 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:22:54 crc kubenswrapper[4904]: I1121 15:22:54.518172 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:22:54 crc kubenswrapper[4904]: E1121 15:22:54.524266 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:23:08 crc kubenswrapper[4904]: I1121 15:23:08.513720 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:23:08 crc kubenswrapper[4904]: I1121 15:23:08.963822 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"efc849038e17108e704a37f20b69c5f17e36055460897e411ab175e6ad4b7a29"} Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.783173 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s4trg"] Nov 21 15:23:23 crc kubenswrapper[4904]: E1121 15:23:23.784351 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="registry-server" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.784367 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="registry-server" Nov 21 15:23:23 crc kubenswrapper[4904]: E1121 15:23:23.784394 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="extract-utilities" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.784404 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="extract-utilities" Nov 21 15:23:23 crc kubenswrapper[4904]: E1121 15:23:23.784446 4904 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="extract-content" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.784454 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="extract-content" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.784764 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a939fa-c1e9-4a70-966e-933393f0e174" containerName="registry-server" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.786960 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.811962 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4trg"] Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.868728 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-catalog-content\") pod \"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.868826 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-utilities\") pod \"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.868901 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfxb\" (UniqueName: \"kubernetes.io/projected/a09088cc-4f5f-4112-9c67-139b6979d342-kube-api-access-cjfxb\") pod \"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.969966 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-catalog-content\") pod \"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.970053 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-utilities\") pod \"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.970120 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfxb\" (UniqueName: \"kubernetes.io/projected/a09088cc-4f5f-4112-9c67-139b6979d342-kube-api-access-cjfxb\") pod \"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.971055 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-catalog-content\") pod 
\"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.971153 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-utilities\") pod \"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:23 crc kubenswrapper[4904]: I1121 15:23:23.994098 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfxb\" (UniqueName: \"kubernetes.io/projected/a09088cc-4f5f-4112-9c67-139b6979d342-kube-api-access-cjfxb\") pod \"certified-operators-s4trg\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:24 crc kubenswrapper[4904]: I1121 15:23:24.110470 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:24 crc kubenswrapper[4904]: I1121 15:23:24.697048 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s4trg"] Nov 21 15:23:24 crc kubenswrapper[4904]: W1121 15:23:24.701273 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda09088cc_4f5f_4112_9c67_139b6979d342.slice/crio-a5a462ec009d9ac4bbd38755164fd14a6d5bf5bedf4845abfe456c92a9135c60 WatchSource:0}: Error finding container a5a462ec009d9ac4bbd38755164fd14a6d5bf5bedf4845abfe456c92a9135c60: Status 404 returned error can't find the container with id a5a462ec009d9ac4bbd38755164fd14a6d5bf5bedf4845abfe456c92a9135c60 Nov 21 15:23:25 crc kubenswrapper[4904]: I1121 15:23:25.143958 4904 generic.go:334] "Generic (PLEG): container finished" podID="a09088cc-4f5f-4112-9c67-139b6979d342" containerID="68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689" exitCode=0 Nov 21 15:23:25 crc kubenswrapper[4904]: I1121 15:23:25.144045 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4trg" event={"ID":"a09088cc-4f5f-4112-9c67-139b6979d342","Type":"ContainerDied","Data":"68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689"} Nov 21 15:23:25 crc kubenswrapper[4904]: I1121 15:23:25.146904 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4trg" event={"ID":"a09088cc-4f5f-4112-9c67-139b6979d342","Type":"ContainerStarted","Data":"a5a462ec009d9ac4bbd38755164fd14a6d5bf5bedf4845abfe456c92a9135c60"} Nov 21 15:23:25 crc kubenswrapper[4904]: I1121 15:23:25.146389 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:23:27 crc kubenswrapper[4904]: I1121 15:23:27.171265 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4trg" event={"ID":"a09088cc-4f5f-4112-9c67-139b6979d342","Type":"ContainerStarted","Data":"165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622"} Nov 21 15:23:28 crc kubenswrapper[4904]: I1121 15:23:28.183487 4904 generic.go:334] "Generic (PLEG): container finished" podID="a09088cc-4f5f-4112-9c67-139b6979d342" containerID="165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622" exitCode=0 Nov 21 15:23:28 crc kubenswrapper[4904]: I1121 15:23:28.183556 4904 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4trg" event={"ID":"a09088cc-4f5f-4112-9c67-139b6979d342","Type":"ContainerDied","Data":"165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622"} Nov 21 15:23:29 crc kubenswrapper[4904]: I1121 15:23:29.197573 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4trg" event={"ID":"a09088cc-4f5f-4112-9c67-139b6979d342","Type":"ContainerStarted","Data":"2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6"} Nov 21 15:23:29 crc kubenswrapper[4904]: I1121 15:23:29.237517 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s4trg" podStartSLOduration=2.572005605 podStartE2EDuration="6.237492639s" podCreationTimestamp="2025-11-21 15:23:23 +0000 UTC" firstStartedPulling="2025-11-21 15:23:25.146154486 +0000 UTC m=+6679.267687038" lastFinishedPulling="2025-11-21 15:23:28.81164152 +0000 UTC m=+6682.933174072" observedRunningTime="2025-11-21 15:23:29.219835229 +0000 UTC m=+6683.341367781" watchObservedRunningTime="2025-11-21 15:23:29.237492639 +0000 UTC m=+6683.359025201" Nov 21 15:23:34 crc kubenswrapper[4904]: I1121 15:23:34.110826 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:34 crc kubenswrapper[4904]: I1121 15:23:34.111424 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:34 crc kubenswrapper[4904]: I1121 15:23:34.160868 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:34 crc kubenswrapper[4904]: I1121 15:23:34.354401 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:34 crc kubenswrapper[4904]: I1121 15:23:34.408797 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4trg"] Nov 21 15:23:36 crc kubenswrapper[4904]: I1121 15:23:36.334278 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s4trg" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" containerName="registry-server" containerID="cri-o://2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6" gracePeriod=2 Nov 21 15:23:36 crc kubenswrapper[4904]: I1121 15:23:36.898958 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:36 crc kubenswrapper[4904]: I1121 15:23:36.995740 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-utilities\") pod \"a09088cc-4f5f-4112-9c67-139b6979d342\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " Nov 21 15:23:36 crc kubenswrapper[4904]: I1121 15:23:36.995843 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-catalog-content\") pod \"a09088cc-4f5f-4112-9c67-139b6979d342\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " Nov 21 15:23:36 crc kubenswrapper[4904]: I1121 15:23:36.995927 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjfxb\" (UniqueName: \"kubernetes.io/projected/a09088cc-4f5f-4112-9c67-139b6979d342-kube-api-access-cjfxb\") pod \"a09088cc-4f5f-4112-9c67-139b6979d342\" (UID: \"a09088cc-4f5f-4112-9c67-139b6979d342\") " Nov 21 15:23:36 crc kubenswrapper[4904]: I1121 15:23:36.996771 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-utilities" (OuterVolumeSpecName: "utilities") pod "a09088cc-4f5f-4112-9c67-139b6979d342" (UID: "a09088cc-4f5f-4112-9c67-139b6979d342"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.005204 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a09088cc-4f5f-4112-9c67-139b6979d342-kube-api-access-cjfxb" (OuterVolumeSpecName: "kube-api-access-cjfxb") pod "a09088cc-4f5f-4112-9c67-139b6979d342" (UID: "a09088cc-4f5f-4112-9c67-139b6979d342"). InnerVolumeSpecName "kube-api-access-cjfxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.050207 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a09088cc-4f5f-4112-9c67-139b6979d342" (UID: "a09088cc-4f5f-4112-9c67-139b6979d342"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.098238 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.098279 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a09088cc-4f5f-4112-9c67-139b6979d342-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.098293 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjfxb\" (UniqueName: \"kubernetes.io/projected/a09088cc-4f5f-4112-9c67-139b6979d342-kube-api-access-cjfxb\") on node \"crc\" DevicePath \"\"" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.348023 4904 generic.go:334] "Generic (PLEG): container finished" podID="a09088cc-4f5f-4112-9c67-139b6979d342" containerID="2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6" exitCode=0 Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.348079 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4trg" event={"ID":"a09088cc-4f5f-4112-9c67-139b6979d342","Type":"ContainerDied","Data":"2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6"} Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.348107 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s4trg" event={"ID":"a09088cc-4f5f-4112-9c67-139b6979d342","Type":"ContainerDied","Data":"a5a462ec009d9ac4bbd38755164fd14a6d5bf5bedf4845abfe456c92a9135c60"} Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.348123 4904 scope.go:117] "RemoveContainer" containerID="2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.348288 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s4trg" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.380896 4904 scope.go:117] "RemoveContainer" containerID="165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.394336 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s4trg"] Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.412168 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s4trg"] Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.412806 4904 scope.go:117] "RemoveContainer" containerID="68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.455467 4904 scope.go:117] "RemoveContainer" containerID="2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6" Nov 21 15:23:37 crc kubenswrapper[4904]: E1121 15:23:37.456041 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6\": container with ID starting with 2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6 not found: ID does not exist" containerID="2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.456081 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6"} err="failed to get container status \"2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6\": rpc error: code = NotFound desc = could not find container \"2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6\": container with ID starting with 2198c91e4407a8c015a2c5523e500ff630dae50ae3809a01e9df82dea3ffe0c6 not found: ID does not exist" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.456111 4904 scope.go:117] "RemoveContainer" containerID="165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622" Nov 21 15:23:37 crc kubenswrapper[4904]: E1121 15:23:37.456346 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622\": container with ID starting with 165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622 not found: ID does not exist" containerID="165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.456380 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622"} err="failed to get container status \"165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622\": rpc error: code = NotFound desc = could not find container \"165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622\": container with ID starting with 165b8fb0da9a9ab4ab8543dca978cfb5f0cc1707f8a4878e3a1f7b3cf465c622 not found: ID does not exist" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.456398 4904 scope.go:117] "RemoveContainer" containerID="68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689" Nov 21 15:23:37 crc kubenswrapper[4904]: E1121 15:23:37.456604 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689\": container with ID starting with 68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689 not found: ID does not exist" containerID="68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689" Nov 21 15:23:37 crc kubenswrapper[4904]: I1121 15:23:37.456634 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689"} err="failed to get container status \"68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689\": rpc error: code = NotFound desc = could not find container \"68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689\": container with ID starting with 68c2069b4ca27c46b4496b5710aed0fb17b65339e2cce8733e5cdf0e5fce2689 not found: ID does not exist" Nov 21 15:23:38 crc kubenswrapper[4904]: I1121 15:23:38.526909 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" path="/var/lib/kubelet/pods/a09088cc-4f5f-4112-9c67-139b6979d342/volumes" Nov 21 15:25:28 crc kubenswrapper[4904]: I1121 15:25:28.113525 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:25:28 crc kubenswrapper[4904]: I1121 15:25:28.115837 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:25:58 crc kubenswrapper[4904]: I1121 15:25:58.116085 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:25:58 crc kubenswrapper[4904]: I1121 15:25:58.116620 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.512002 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pmxjp"] Nov 21 15:26:10 crc kubenswrapper[4904]: E1121 15:26:10.512912 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" containerName="registry-server" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.512925 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" containerName="registry-server" Nov 21 15:26:10 crc kubenswrapper[4904]: E1121 15:26:10.512951 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" containerName="extract-utilities" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.512956 4904 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" containerName="extract-utilities" Nov 21 15:26:10 crc kubenswrapper[4904]: E1121 15:26:10.512981 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" containerName="extract-content" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.512989 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" containerName="extract-content" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.513203 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09088cc-4f5f-4112-9c67-139b6979d342" containerName="registry-server" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.514831 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.531780 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmxjp"] Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.660074 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-utilities\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.660170 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcjk\" (UniqueName: \"kubernetes.io/projected/83d65ec9-6929-4769-89fd-166a9ffe8da4-kube-api-access-jmcjk\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.660328 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-catalog-content\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.762204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-catalog-content\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.762328 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-utilities\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.762368 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcjk\" (UniqueName: \"kubernetes.io/projected/83d65ec9-6929-4769-89fd-166a9ffe8da4-kube-api-access-jmcjk\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.763166 
4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-catalog-content\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.763386 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-utilities\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.786461 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcjk\" (UniqueName: \"kubernetes.io/projected/83d65ec9-6929-4769-89fd-166a9ffe8da4-kube-api-access-jmcjk\") pod \"redhat-marketplace-pmxjp\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:10 crc kubenswrapper[4904]: I1121 15:26:10.837974 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:11 crc kubenswrapper[4904]: I1121 15:26:11.419187 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmxjp"] Nov 21 15:26:12 crc kubenswrapper[4904]: I1121 15:26:12.149196 4904 generic.go:334] "Generic (PLEG): container finished" podID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerID="b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922" exitCode=0 Nov 21 15:26:12 crc kubenswrapper[4904]: I1121 15:26:12.149385 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmxjp" event={"ID":"83d65ec9-6929-4769-89fd-166a9ffe8da4","Type":"ContainerDied","Data":"b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922"} Nov 21 15:26:12 crc kubenswrapper[4904]: I1121 15:26:12.149741 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmxjp" event={"ID":"83d65ec9-6929-4769-89fd-166a9ffe8da4","Type":"ContainerStarted","Data":"e6d42ae8f4926db13112ed6cebab475b05d7e8a3738d1463e6c38997cc31650e"} Nov 21 15:26:14 crc kubenswrapper[4904]: I1121 15:26:14.193263 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmxjp" event={"ID":"83d65ec9-6929-4769-89fd-166a9ffe8da4","Type":"ContainerStarted","Data":"e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f"} Nov 21 15:26:15 crc kubenswrapper[4904]: I1121 15:26:15.222897 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmxjp" event={"ID":"83d65ec9-6929-4769-89fd-166a9ffe8da4","Type":"ContainerDied","Data":"e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f"} Nov 21 15:26:15 crc kubenswrapper[4904]: I1121 15:26:15.222812 4904 generic.go:334] "Generic (PLEG): container finished" podID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerID="e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f" exitCode=0 Nov 21 15:26:16 crc kubenswrapper[4904]: I1121 15:26:16.238848 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmxjp" 
event={"ID":"83d65ec9-6929-4769-89fd-166a9ffe8da4","Type":"ContainerStarted","Data":"ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8"} Nov 21 15:26:16 crc kubenswrapper[4904]: I1121 15:26:16.272755 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pmxjp" podStartSLOduration=2.796048859 podStartE2EDuration="6.272734937s" podCreationTimestamp="2025-11-21 15:26:10 +0000 UTC" firstStartedPulling="2025-11-21 15:26:12.151712152 +0000 UTC m=+6846.273244704" lastFinishedPulling="2025-11-21 15:26:15.62839823 +0000 UTC m=+6849.749930782" observedRunningTime="2025-11-21 15:26:16.272508512 +0000 UTC m=+6850.394041064" watchObservedRunningTime="2025-11-21 15:26:16.272734937 +0000 UTC m=+6850.394267489" Nov 21 15:26:20 crc kubenswrapper[4904]: I1121 15:26:20.838688 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:20 crc kubenswrapper[4904]: I1121 15:26:20.839594 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:20 crc kubenswrapper[4904]: I1121 15:26:20.898878 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:21 crc kubenswrapper[4904]: I1121 15:26:21.362804 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:21 crc kubenswrapper[4904]: I1121 15:26:21.430811 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmxjp"] Nov 21 15:26:23 crc kubenswrapper[4904]: I1121 15:26:23.375833 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pmxjp" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerName="registry-server" containerID="cri-o://ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8" gracePeriod=2 Nov 21 15:26:23 crc kubenswrapper[4904]: I1121 15:26:23.982845 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.142146 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-utilities\") pod \"83d65ec9-6929-4769-89fd-166a9ffe8da4\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.142396 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmcjk\" (UniqueName: \"kubernetes.io/projected/83d65ec9-6929-4769-89fd-166a9ffe8da4-kube-api-access-jmcjk\") pod \"83d65ec9-6929-4769-89fd-166a9ffe8da4\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.142594 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-catalog-content\") pod \"83d65ec9-6929-4769-89fd-166a9ffe8da4\" (UID: \"83d65ec9-6929-4769-89fd-166a9ffe8da4\") " Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.145372 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-utilities" (OuterVolumeSpecName: "utilities") pod "83d65ec9-6929-4769-89fd-166a9ffe8da4" (UID: "83d65ec9-6929-4769-89fd-166a9ffe8da4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.153011 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d65ec9-6929-4769-89fd-166a9ffe8da4-kube-api-access-jmcjk" (OuterVolumeSpecName: "kube-api-access-jmcjk") pod "83d65ec9-6929-4769-89fd-166a9ffe8da4" (UID: "83d65ec9-6929-4769-89fd-166a9ffe8da4"). InnerVolumeSpecName "kube-api-access-jmcjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.163023 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83d65ec9-6929-4769-89fd-166a9ffe8da4" (UID: "83d65ec9-6929-4769-89fd-166a9ffe8da4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.247351 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.247388 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmcjk\" (UniqueName: \"kubernetes.io/projected/83d65ec9-6929-4769-89fd-166a9ffe8da4-kube-api-access-jmcjk\") on node \"crc\" DevicePath \"\"" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.247399 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d65ec9-6929-4769-89fd-166a9ffe8da4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.418967 4904 generic.go:334] "Generic (PLEG): container finished" podID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerID="ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8" exitCode=0 Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.419000 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pmxjp" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.419016 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmxjp" event={"ID":"83d65ec9-6929-4769-89fd-166a9ffe8da4","Type":"ContainerDied","Data":"ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8"} Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.421493 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pmxjp" event={"ID":"83d65ec9-6929-4769-89fd-166a9ffe8da4","Type":"ContainerDied","Data":"e6d42ae8f4926db13112ed6cebab475b05d7e8a3738d1463e6c38997cc31650e"} Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.421549 4904 scope.go:117] "RemoveContainer" containerID="ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.455719 4904 scope.go:117] "RemoveContainer" containerID="e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.497644 4904 scope.go:117] "RemoveContainer" containerID="b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.547271 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmxjp"] Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.555167 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pmxjp"] Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.562243 4904 scope.go:117] "RemoveContainer" containerID="ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8" Nov 21 15:26:24 crc kubenswrapper[4904]: E1121 15:26:24.567733 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8\": container with ID starting with ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8 not found: ID does not exist" containerID="ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.567824 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8"} err="failed to get container status \"ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8\": rpc error: code = NotFound desc = could not find container \"ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8\": container with ID starting with ef90bd380b9f2b93f2dc577b6aef79b283773f1301601bb82cdf981dff98edb8 not found: ID does not exist" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.567871 4904 scope.go:117] "RemoveContainer" containerID="e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f" Nov 21 15:26:24 crc kubenswrapper[4904]: E1121 15:26:24.574464 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f\": container with ID starting with e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f not found: ID does not exist" containerID="e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.574528 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f"} err="failed to get container status \"e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f\": rpc error: code = NotFound desc = could not find container \"e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f\": container with ID starting with e83924d128a5afc3935ac7f293e3074ee7352dfbf982949882cf9f1de932634f not found: ID does not exist" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.574563 4904 scope.go:117] "RemoveContainer" containerID="b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922" Nov 21 15:26:24 crc kubenswrapper[4904]: E1121 15:26:24.581914 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922\": container with ID starting with b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922 not found: ID does not exist" containerID="b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922" Nov 21 15:26:24 crc kubenswrapper[4904]: I1121 15:26:24.581982 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922"} err="failed to get container status \"b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922\": rpc error: code = NotFound desc = could not find container \"b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922\": container with ID starting with b9a04f0cebb2ba34bb4d3ed73c635f1c43920e516c38f5bfbf2f738890dcf922 not found: ID does not exist" Nov 21 15:26:26 crc kubenswrapper[4904]: I1121 15:26:26.526389 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" path="/var/lib/kubelet/pods/83d65ec9-6929-4769-89fd-166a9ffe8da4/volumes" Nov 21 15:26:28 crc kubenswrapper[4904]: I1121 15:26:28.113969 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:26:28 crc kubenswrapper[4904]: I1121 15:26:28.114079 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:26:28 crc kubenswrapper[4904]: I1121 15:26:28.114160 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 15:26:28 crc kubenswrapper[4904]: I1121 15:26:28.115624 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efc849038e17108e704a37f20b69c5f17e36055460897e411ab175e6ad4b7a29"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 15:26:28 crc kubenswrapper[4904]: I1121 15:26:28.115713 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://efc849038e17108e704a37f20b69c5f17e36055460897e411ab175e6ad4b7a29" gracePeriod=600 Nov 21 15:26:28 crc kubenswrapper[4904]: I1121 15:26:28.483213 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="efc849038e17108e704a37f20b69c5f17e36055460897e411ab175e6ad4b7a29" exitCode=0 Nov 21 15:26:28 crc kubenswrapper[4904]: I1121 15:26:28.483262 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"efc849038e17108e704a37f20b69c5f17e36055460897e411ab175e6ad4b7a29"} Nov 21 15:26:28 crc kubenswrapper[4904]: I1121 15:26:28.483299 4904 scope.go:117] "RemoveContainer" containerID="bc8400c5800dd3ef018782b9de8c5120ec11462d2f3e52c0aeeadcbb20a7bcdb" Nov 21 15:26:29 crc kubenswrapper[4904]: I1121 15:26:29.514077 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d"} Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.416854 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zphbl"] Nov 21 15:26:55 crc kubenswrapper[4904]: E1121 15:26:55.417909 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerName="extract-content" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.417922 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerName="extract-content" Nov 21 15:26:55 crc kubenswrapper[4904]: E1121 15:26:55.417943 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerName="extract-utilities" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.417949 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" 
containerName="extract-utilities" Nov 21 15:26:55 crc kubenswrapper[4904]: E1121 15:26:55.417960 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerName="registry-server" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.417967 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerName="registry-server" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.418187 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d65ec9-6929-4769-89fd-166a9ffe8da4" containerName="registry-server" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.426775 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.465406 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zphbl"] Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.484700 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d426l\" (UniqueName: \"kubernetes.io/projected/128d4ba2-daea-4930-9293-c46892a80040-kube-api-access-d426l\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.484772 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-utilities\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.484893 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-catalog-content\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.587091 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d426l\" (UniqueName: \"kubernetes.io/projected/128d4ba2-daea-4930-9293-c46892a80040-kube-api-access-d426l\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.587184 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-utilities\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.587266 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-catalog-content\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.587909 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-catalog-content\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.588097 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-utilities\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.608561 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d426l\" (UniqueName: \"kubernetes.io/projected/128d4ba2-daea-4930-9293-c46892a80040-kube-api-access-d426l\") pod \"redhat-operators-zphbl\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:55 crc kubenswrapper[4904]: I1121 15:26:55.755970 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:26:56 crc kubenswrapper[4904]: I1121 15:26:56.349242 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zphbl"] Nov 21 15:26:56 crc kubenswrapper[4904]: I1121 15:26:56.832425 4904 generic.go:334] "Generic (PLEG): container finished" podID="128d4ba2-daea-4930-9293-c46892a80040" containerID="fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389" exitCode=0 Nov 21 15:26:56 crc kubenswrapper[4904]: I1121 15:26:56.832534 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zphbl" event={"ID":"128d4ba2-daea-4930-9293-c46892a80040","Type":"ContainerDied","Data":"fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389"} Nov 21 15:26:56 crc kubenswrapper[4904]: I1121 15:26:56.833632 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zphbl" event={"ID":"128d4ba2-daea-4930-9293-c46892a80040","Type":"ContainerStarted","Data":"200b3741c1c47b8caf72aec92b3a40f94a438685a577881280e1e1e783325764"} Nov 21 15:26:58 crc kubenswrapper[4904]: I1121 15:26:58.853328 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zphbl" event={"ID":"128d4ba2-daea-4930-9293-c46892a80040","Type":"ContainerStarted","Data":"fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957"} Nov 21 15:27:03 crc kubenswrapper[4904]: I1121 15:27:03.907999 4904 generic.go:334] "Generic (PLEG): container finished" podID="128d4ba2-daea-4930-9293-c46892a80040" containerID="fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957" exitCode=0 Nov 21 15:27:03 crc kubenswrapper[4904]: I1121 15:27:03.908069 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zphbl" event={"ID":"128d4ba2-daea-4930-9293-c46892a80040","Type":"ContainerDied","Data":"fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957"} Nov 21 15:27:04 crc kubenswrapper[4904]: I1121 15:27:04.927599 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zphbl" event={"ID":"128d4ba2-daea-4930-9293-c46892a80040","Type":"ContainerStarted","Data":"efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e"} Nov 21 15:27:04 crc kubenswrapper[4904]: I1121 15:27:04.956638 4904 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zphbl" podStartSLOduration=2.505286953 podStartE2EDuration="9.956617247s" podCreationTimestamp="2025-11-21 15:26:55 +0000 UTC" firstStartedPulling="2025-11-21 15:26:56.834845722 +0000 UTC m=+6890.956378274" lastFinishedPulling="2025-11-21 15:27:04.286176016 +0000 UTC m=+6898.407708568" observedRunningTime="2025-11-21 15:27:04.951048212 +0000 UTC m=+6899.072580774" watchObservedRunningTime="2025-11-21 15:27:04.956617247 +0000 UTC m=+6899.078149799" Nov 21 15:27:05 crc kubenswrapper[4904]: I1121 15:27:05.756360 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:27:05 crc kubenswrapper[4904]: I1121 15:27:05.756417 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:27:06 crc kubenswrapper[4904]: I1121 15:27:06.811343 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zphbl" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="registry-server" probeResult="failure" output=< Nov 21 15:27:06 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:27:06 crc kubenswrapper[4904]: > Nov 21 15:27:15 crc kubenswrapper[4904]: I1121 15:27:15.814252 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:27:15 crc kubenswrapper[4904]: I1121 15:27:15.891218 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:27:16 crc kubenswrapper[4904]: I1121 15:27:16.060973 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zphbl"] Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.062202 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zphbl" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="registry-server" containerID="cri-o://efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e" gracePeriod=2 Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.707471 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.862433 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-utilities\") pod \"128d4ba2-daea-4930-9293-c46892a80040\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.862762 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d426l\" (UniqueName: \"kubernetes.io/projected/128d4ba2-daea-4930-9293-c46892a80040-kube-api-access-d426l\") pod \"128d4ba2-daea-4930-9293-c46892a80040\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.862846 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-catalog-content\") pod \"128d4ba2-daea-4930-9293-c46892a80040\" (UID: \"128d4ba2-daea-4930-9293-c46892a80040\") " Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.863646 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-utilities" (OuterVolumeSpecName: "utilities") pod "128d4ba2-daea-4930-9293-c46892a80040" (UID: "128d4ba2-daea-4930-9293-c46892a80040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.871338 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128d4ba2-daea-4930-9293-c46892a80040-kube-api-access-d426l" (OuterVolumeSpecName: "kube-api-access-d426l") pod "128d4ba2-daea-4930-9293-c46892a80040" (UID: "128d4ba2-daea-4930-9293-c46892a80040"). InnerVolumeSpecName "kube-api-access-d426l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.872240 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d426l\" (UniqueName: \"kubernetes.io/projected/128d4ba2-daea-4930-9293-c46892a80040-kube-api-access-d426l\") on node \"crc\" DevicePath \"\"" Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.872306 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:27:17 crc kubenswrapper[4904]: I1121 15:27:17.975603 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "128d4ba2-daea-4930-9293-c46892a80040" (UID: "128d4ba2-daea-4930-9293-c46892a80040"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.073231 4904 generic.go:334] "Generic (PLEG): container finished" podID="128d4ba2-daea-4930-9293-c46892a80040" containerID="efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e" exitCode=0 Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.073299 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zphbl" event={"ID":"128d4ba2-daea-4930-9293-c46892a80040","Type":"ContainerDied","Data":"efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e"} Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.073336 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zphbl" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.074366 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zphbl" event={"ID":"128d4ba2-daea-4930-9293-c46892a80040","Type":"ContainerDied","Data":"200b3741c1c47b8caf72aec92b3a40f94a438685a577881280e1e1e783325764"} Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.074452 4904 scope.go:117] "RemoveContainer" containerID="efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.076229 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/128d4ba2-daea-4930-9293-c46892a80040-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.096869 4904 scope.go:117] "RemoveContainer" containerID="fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.118246 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zphbl"] Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.128175 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zphbl"] Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.139595 4904 scope.go:117] "RemoveContainer" containerID="fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.179241 4904 scope.go:117] "RemoveContainer" containerID="efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e" Nov 21 15:27:18 crc kubenswrapper[4904]: E1121 15:27:18.179897 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e\": container with ID starting with efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e not found: ID does not exist" containerID="efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.179938 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e"} err="failed to get container status \"efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e\": rpc error: code = NotFound desc = could not find container \"efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e\": container with ID starting with efbecbcce0195fbb18a6305d839f3e0ba07c1b814b3eb7adc9a923a1e39edc9e not found: ID does not exist" Nov 21 15:27:18 crc 
kubenswrapper[4904]: I1121 15:27:18.179963 4904 scope.go:117] "RemoveContainer" containerID="fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957" Nov 21 15:27:18 crc kubenswrapper[4904]: E1121 15:27:18.180289 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957\": container with ID starting with fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957 not found: ID does not exist" containerID="fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.180321 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957"} err="failed to get container status \"fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957\": rpc error: code = NotFound desc = could not find container \"fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957\": container with ID starting with fe8e667b1878ecf958aba66f7d65a91124cb069e42dec58b6f3ef4eaaef7b957 not found: ID does not exist" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.180341 4904 scope.go:117] "RemoveContainer" containerID="fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389" Nov 21 15:27:18 crc kubenswrapper[4904]: E1121 15:27:18.180973 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389\": container with ID starting with fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389 not found: ID does not exist" containerID="fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.181002 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389"} err="failed to get container status \"fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389\": rpc error: code = NotFound desc = could not find container \"fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389\": container with ID starting with fd9f263eacb2e5194ce4921b8ced4343dc8703329081496ff5af29fd90420389 not found: ID does not exist" Nov 21 15:27:18 crc kubenswrapper[4904]: I1121 15:27:18.526517 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="128d4ba2-daea-4930-9293-c46892a80040" path="/var/lib/kubelet/pods/128d4ba2-daea-4930-9293-c46892a80040/volumes" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.638049 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 15:27:52 crc kubenswrapper[4904]: E1121 15:27:52.639112 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="extract-utilities" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.639130 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="extract-utilities" Nov 21 15:27:52 crc kubenswrapper[4904]: E1121 15:27:52.639146 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="extract-content" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.639154 4904 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="extract-content" Nov 21 15:27:52 crc kubenswrapper[4904]: E1121 15:27:52.639186 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="registry-server" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.639192 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="registry-server" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.639424 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="128d4ba2-daea-4930-9293-c46892a80040" containerName="registry-server" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.640247 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.645973 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.646253 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zgzr7" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.647149 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.647443 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.652340 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672229 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672300 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hslnx\" (UniqueName: \"kubernetes.io/projected/24d17dbe-f722-4f81-b271-ca1e191780a8-kube-api-access-hslnx\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672367 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-config-data\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672432 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672495 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672530 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672596 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672748 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.672785 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.776887 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-config-data\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.776976 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-config-data\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.777112 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.777253 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.777679 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.777725 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.777831 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.778188 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.778640 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.778717 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.778761 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.778808 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hslnx\" (UniqueName: \"kubernetes.io/projected/24d17dbe-f722-4f81-b271-ca1e191780a8-kube-api-access-hslnx\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.778963 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.781013 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: 
\"24d17dbe-f722-4f81-b271-ca1e191780a8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.783889 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.783936 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.785562 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.795633 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hslnx\" (UniqueName: \"kubernetes.io/projected/24d17dbe-f722-4f81-b271-ca1e191780a8-kube-api-access-hslnx\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.815512 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " pod="openstack/tempest-tests-tempest" Nov 21 15:27:52 crc kubenswrapper[4904]: I1121 15:27:52.976535 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 15:27:53 crc kubenswrapper[4904]: I1121 15:27:53.497212 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 21 15:27:54 crc kubenswrapper[4904]: I1121 15:27:54.501611 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"24d17dbe-f722-4f81-b271-ca1e191780a8","Type":"ContainerStarted","Data":"ff4795d0c74139c7a107bdb700485bee2760678bac8d7a0b29750ef76216585a"} Nov 21 15:28:28 crc kubenswrapper[4904]: I1121 15:28:28.113760 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:28:28 crc kubenswrapper[4904]: I1121 15:28:28.114307 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:28:30 crc kubenswrapper[4904]: E1121 15:28:30.163758 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 21 15:28:30 crc kubenswrapper[4904]: E1121 15:28:30.168230 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hslnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.
io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(24d17dbe-f722-4f81-b271-ca1e191780a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 21 15:28:30 crc kubenswrapper[4904]: E1121 15:28:30.169587 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="24d17dbe-f722-4f81-b271-ca1e191780a8" Nov 21 15:28:30 crc kubenswrapper[4904]: E1121 15:28:30.960198 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="24d17dbe-f722-4f81-b271-ca1e191780a8" Nov 21 15:28:37 crc kubenswrapper[4904]: I1121 15:28:37.265925 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" podUID="e819c802-1c71-4668-bc99-5b41cc11c656" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 21 15:28:42 crc kubenswrapper[4904]: I1121 15:28:42.520207 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:28:43 crc kubenswrapper[4904]: I1121 15:28:43.161565 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 21 15:28:45 crc kubenswrapper[4904]: I1121 15:28:45.114498 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"24d17dbe-f722-4f81-b271-ca1e191780a8","Type":"ContainerStarted","Data":"c3a6be066c8f0a59279ebc2465d911fa357941462dceae14a52854bcd474292b"} Nov 21 15:28:45 crc kubenswrapper[4904]: I1121 15:28:45.139590 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.48989383 podStartE2EDuration="54.139572718s" podCreationTimestamp="2025-11-21 15:27:51 +0000 UTC" firstStartedPulling="2025-11-21 15:27:53.508668984 +0000 UTC m=+6947.630201536" lastFinishedPulling="2025-11-21 15:28:43.158347872 +0000 UTC m=+6997.279880424" observedRunningTime="2025-11-21 15:28:45.1351323 
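Each record above is one journal line carrying a klog header (severity letter plus MMDD, wall-clock time, PID 4904, and source file:line) after the systemd-journal prefix. The failed pull shows the usual kubelet escalation: the CRI PullImage RPC is canceled, kuberuntime_manager surfaces ErrImagePull once, the pod worker throttles retries as ImagePullBackOff, and the retry that finishes at 15:28:43 succeeds. A minimal parsing sketch for such records; the regex and field names are my own, not from any existing tool:

    import re

    # Journal prefix + klog header, e.g.:
    # Nov 21 15:28:30 crc kubenswrapper[4904]: E1121 15:28:30.163758 4904 log.go:32] "PullImage ..."
    KLOG = re.compile(
        r'^(?P<month>\w{3}) (?P<day>\s?\d+) (?P<hms>\d{2}:\d{2}:\d{2}) '
        r'(?P<host>\S+) (?P<unit>[\w-]+)\[(?P<unitpid>\d+)\]: '
        r'(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) '
        r'(?P<pid>\d+) (?P<src>[\w./]+:\d+)\] (?P<msg>.*)$'
    )

    def parse(line: str):
        """Split one kubenswrapper journal record into its header fields."""
        m = KLOG.match(line)
        return m.groupdict() if m else None

    sample = ('Nov 21 15:28:30 crc kubenswrapper[4904]: E1121 15:28:30.163758 4904 '
              'log.go:32] "PullImage from image service failed" err="rpc error: ..."')
    rec = parse(sample)
    print(rec["sev"], rec["src"])  # E log.go:32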
Nov 21 15:28:58 crc kubenswrapper[4904]: I1121 15:28:58.114246 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:28:58 crc kubenswrapper[4904]: I1121 15:28:58.114840 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.689144 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2bdm"]
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.692240 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.766754 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2bdm"]
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.851145 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-utilities\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.851369 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-catalog-content\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.851551 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2qp\" (UniqueName: \"kubernetes.io/projected/f79b6cf8-6553-493a-964c-ebb8c2d15989-kube-api-access-9k2qp\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.953906 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-utilities\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.954011 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-catalog-content\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.954069 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k2qp\" (UniqueName: \"kubernetes.io/projected/f79b6cf8-6553-493a-964c-ebb8c2d15989-kube-api-access-9k2qp\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.954924 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-utilities\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.955799 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-catalog-content\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:26 crc kubenswrapper[4904]: I1121 15:29:26.981056 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k2qp\" (UniqueName: \"kubernetes.io/projected/f79b6cf8-6553-493a-964c-ebb8c2d15989-kube-api-access-9k2qp\") pod \"community-operators-w2bdm\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:27 crc kubenswrapper[4904]: I1121 15:29:27.020407 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2bdm"
Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.116564 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.117704 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.117771 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.118974 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.119039 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" gracePeriod=600
Nov 21 15:29:28 crc kubenswrapper[4904]: E1121 15:29:28.264463 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.409433 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2bdm"] Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.779150 4904 generic.go:334] "Generic (PLEG): container finished" podID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerID="59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5" exitCode=0 Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.779268 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bdm" event={"ID":"f79b6cf8-6553-493a-964c-ebb8c2d15989","Type":"ContainerDied","Data":"59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5"} Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.779333 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bdm" event={"ID":"f79b6cf8-6553-493a-964c-ebb8c2d15989","Type":"ContainerStarted","Data":"0a566a12d80787592008b6e723d4e016e8ca76e18ff70fffe9447d38aac6d2fc"} Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.796899 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" exitCode=0 Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.796961 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d"} Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.797025 4904 scope.go:117] "RemoveContainer" containerID="efc849038e17108e704a37f20b69c5f17e36055460897e411ab175e6ad4b7a29" Nov 21 15:29:28 crc kubenswrapper[4904]: I1121 15:29:28.797991 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:29:28 crc kubenswrapper[4904]: E1121 15:29:28.798597 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:29:29 crc kubenswrapper[4904]: I1121 15:29:29.813435 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bdm" event={"ID":"f79b6cf8-6553-493a-964c-ebb8c2d15989","Type":"ContainerStarted","Data":"602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56"} Nov 21 15:29:32 crc kubenswrapper[4904]: I1121 15:29:32.857019 4904 generic.go:334] "Generic (PLEG): container finished" podID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerID="602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56" exitCode=0 Nov 21 15:29:32 crc kubenswrapper[4904]: I1121 15:29:32.857114 4904 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bdm" event={"ID":"f79b6cf8-6553-493a-964c-ebb8c2d15989","Type":"ContainerDied","Data":"602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56"} Nov 21 15:29:33 crc kubenswrapper[4904]: I1121 15:29:33.872941 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bdm" event={"ID":"f79b6cf8-6553-493a-964c-ebb8c2d15989","Type":"ContainerStarted","Data":"fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276"} Nov 21 15:29:33 crc kubenswrapper[4904]: I1121 15:29:33.893546 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2bdm" podStartSLOduration=3.39868372 podStartE2EDuration="7.893527138s" podCreationTimestamp="2025-11-21 15:29:26 +0000 UTC" firstStartedPulling="2025-11-21 15:29:28.781741109 +0000 UTC m=+7042.903273651" lastFinishedPulling="2025-11-21 15:29:33.276584517 +0000 UTC m=+7047.398117069" observedRunningTime="2025-11-21 15:29:33.888378664 +0000 UTC m=+7048.009911216" watchObservedRunningTime="2025-11-21 15:29:33.893527138 +0000 UTC m=+7048.015059690" Nov 21 15:29:37 crc kubenswrapper[4904]: I1121 15:29:37.020761 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2bdm" Nov 21 15:29:37 crc kubenswrapper[4904]: I1121 15:29:37.021806 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2bdm" Nov 21 15:29:38 crc kubenswrapper[4904]: I1121 15:29:38.078839 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-w2bdm" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerName="registry-server" probeResult="failure" output=< Nov 21 15:29:38 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:29:38 crc kubenswrapper[4904]: > Nov 21 15:29:44 crc kubenswrapper[4904]: I1121 15:29:44.513535 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:29:44 crc kubenswrapper[4904]: E1121 15:29:44.515731 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:29:47 crc kubenswrapper[4904]: I1121 15:29:47.079500 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2bdm" Nov 21 15:29:47 crc kubenswrapper[4904]: I1121 15:29:47.149165 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2bdm" Nov 21 15:29:47 crc kubenswrapper[4904]: I1121 15:29:47.325558 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2bdm"] Nov 21 15:29:49 crc kubenswrapper[4904]: I1121 15:29:49.035960 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2bdm" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerName="registry-server" 
containerID="cri-o://fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276" gracePeriod=2 Nov 21 15:29:49 crc kubenswrapper[4904]: I1121 15:29:49.815911 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2bdm" Nov 21 15:29:49 crc kubenswrapper[4904]: I1121 15:29:49.994248 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-utilities\") pod \"f79b6cf8-6553-493a-964c-ebb8c2d15989\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " Nov 21 15:29:49 crc kubenswrapper[4904]: I1121 15:29:49.994510 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-catalog-content\") pod \"f79b6cf8-6553-493a-964c-ebb8c2d15989\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " Nov 21 15:29:49 crc kubenswrapper[4904]: I1121 15:29:49.994703 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k2qp\" (UniqueName: \"kubernetes.io/projected/f79b6cf8-6553-493a-964c-ebb8c2d15989-kube-api-access-9k2qp\") pod \"f79b6cf8-6553-493a-964c-ebb8c2d15989\" (UID: \"f79b6cf8-6553-493a-964c-ebb8c2d15989\") " Nov 21 15:29:49 crc kubenswrapper[4904]: I1121 15:29:49.996517 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-utilities" (OuterVolumeSpecName: "utilities") pod "f79b6cf8-6553-493a-964c-ebb8c2d15989" (UID: "f79b6cf8-6553-493a-964c-ebb8c2d15989"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.027884 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79b6cf8-6553-493a-964c-ebb8c2d15989-kube-api-access-9k2qp" (OuterVolumeSpecName: "kube-api-access-9k2qp") pod "f79b6cf8-6553-493a-964c-ebb8c2d15989" (UID: "f79b6cf8-6553-493a-964c-ebb8c2d15989"). InnerVolumeSpecName "kube-api-access-9k2qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.072326 4904 generic.go:334] "Generic (PLEG): container finished" podID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerID="fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276" exitCode=0 Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.072382 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bdm" event={"ID":"f79b6cf8-6553-493a-964c-ebb8c2d15989","Type":"ContainerDied","Data":"fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276"} Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.072423 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2bdm" event={"ID":"f79b6cf8-6553-493a-964c-ebb8c2d15989","Type":"ContainerDied","Data":"0a566a12d80787592008b6e723d4e016e8ca76e18ff70fffe9447d38aac6d2fc"} Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.072446 4904 scope.go:117] "RemoveContainer" containerID="fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.072593 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w2bdm" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.097211 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f79b6cf8-6553-493a-964c-ebb8c2d15989" (UID: "f79b6cf8-6553-493a-964c-ebb8c2d15989"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.098253 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k2qp\" (UniqueName: \"kubernetes.io/projected/f79b6cf8-6553-493a-964c-ebb8c2d15989-kube-api-access-9k2qp\") on node \"crc\" DevicePath \"\"" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.098294 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.098306 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f79b6cf8-6553-493a-964c-ebb8c2d15989-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.104621 4904 scope.go:117] "RemoveContainer" containerID="602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.129890 4904 scope.go:117] "RemoveContainer" containerID="59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.200988 4904 scope.go:117] "RemoveContainer" containerID="fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276" Nov 21 15:29:50 crc kubenswrapper[4904]: E1121 15:29:50.201459 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276\": container with ID starting with fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276 not found: ID does not exist" containerID="fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.202320 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276"} err="failed to get container status \"fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276\": rpc error: code = NotFound desc = could not find container \"fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276\": container with ID starting with fb762ef5d704cb6cad9f1f26c29bbbd131705e433d83df2a235ea9ea406d5276 not found: ID does not exist" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.202362 4904 scope.go:117] "RemoveContainer" containerID="602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56" Nov 21 15:29:50 crc kubenswrapper[4904]: E1121 15:29:50.202644 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56\": container with ID starting with 602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56 not found: ID does not exist" 
containerID="602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.202696 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56"} err="failed to get container status \"602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56\": rpc error: code = NotFound desc = could not find container \"602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56\": container with ID starting with 602afa0a322fdf69f4aab88a57d097a6b4d641f322d27c67c33468ecbdd67a56 not found: ID does not exist" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.202716 4904 scope.go:117] "RemoveContainer" containerID="59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5" Nov 21 15:29:50 crc kubenswrapper[4904]: E1121 15:29:50.202926 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5\": container with ID starting with 59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5 not found: ID does not exist" containerID="59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.203154 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5"} err="failed to get container status \"59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5\": rpc error: code = NotFound desc = could not find container \"59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5\": container with ID starting with 59a3cb0baa6d0142fcc04257f6d5076cc872176d81b7d7242cfdba83135e25c5 not found: ID does not exist" Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.416997 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2bdm"] Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.429160 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2bdm"] Nov 21 15:29:50 crc kubenswrapper[4904]: I1121 15:29:50.526974 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" path="/var/lib/kubelet/pods/f79b6cf8-6553-493a-964c-ebb8c2d15989/volumes" Nov 21 15:29:56 crc kubenswrapper[4904]: I1121 15:29:56.522007 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:29:56 crc kubenswrapper[4904]: E1121 15:29:56.523011 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.190563 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94"] Nov 21 15:30:00 crc kubenswrapper[4904]: E1121 15:30:00.191689 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" 
containerName="extract-content" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.191716 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerName="extract-content" Nov 21 15:30:00 crc kubenswrapper[4904]: E1121 15:30:00.191734 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerName="extract-utilities" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.191742 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerName="extract-utilities" Nov 21 15:30:00 crc kubenswrapper[4904]: E1121 15:30:00.191786 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerName="registry-server" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.191792 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerName="registry-server" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.192023 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f79b6cf8-6553-493a-964c-ebb8c2d15989" containerName="registry-server" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.192873 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.196148 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.197152 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.203756 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94"] Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.356553 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-config-volume\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.356741 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-secret-volume\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.356767 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkr57\" (UniqueName: \"kubernetes.io/projected/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-kube-api-access-gkr57\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.459159 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-config-volume\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.459273 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-secret-volume\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.459293 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkr57\" (UniqueName: \"kubernetes.io/projected/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-kube-api-access-gkr57\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.460309 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-config-volume\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.478587 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-secret-volume\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.479069 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkr57\" (UniqueName: \"kubernetes.io/projected/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-kube-api-access-gkr57\") pod \"collect-profiles-29395650-tjw94\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:00 crc kubenswrapper[4904]: I1121 15:30:00.517615 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:01 crc kubenswrapper[4904]: I1121 15:30:01.099600 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94"] Nov 21 15:30:01 crc kubenswrapper[4904]: I1121 15:30:01.189122 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" event={"ID":"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8","Type":"ContainerStarted","Data":"bada7f47f3495b545fb290a284eefb0fa698acd8707486d88659b332c803c9f4"} Nov 21 15:30:02 crc kubenswrapper[4904]: I1121 15:30:02.201394 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" event={"ID":"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8","Type":"ContainerStarted","Data":"d591cbd17a425269b0da2e38e7022909f47a7f0074f095d4e7edc832c46b29e6"} Nov 21 15:30:02 crc kubenswrapper[4904]: I1121 15:30:02.221550 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" podStartSLOduration=2.221531913 podStartE2EDuration="2.221531913s" podCreationTimestamp="2025-11-21 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:30:02.216346988 +0000 UTC m=+7076.337879560" watchObservedRunningTime="2025-11-21 15:30:02.221531913 +0000 UTC m=+7076.343064465" Nov 21 15:30:03 crc kubenswrapper[4904]: I1121 15:30:03.215798 4904 generic.go:334] "Generic (PLEG): container finished" podID="ecc47cd9-fd7e-44fd-9af6-41e34c400ff8" containerID="d591cbd17a425269b0da2e38e7022909f47a7f0074f095d4e7edc832c46b29e6" exitCode=0 Nov 21 15:30:03 crc kubenswrapper[4904]: I1121 15:30:03.215890 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" event={"ID":"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8","Type":"ContainerDied","Data":"d591cbd17a425269b0da2e38e7022909f47a7f0074f095d4e7edc832c46b29e6"} Nov 21 15:30:04 crc kubenswrapper[4904]: I1121 15:30:04.816423 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:04 crc kubenswrapper[4904]: I1121 15:30:04.982136 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-config-volume\") pod \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " Nov 21 15:30:04 crc kubenswrapper[4904]: I1121 15:30:04.982216 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-secret-volume\") pod \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " Nov 21 15:30:04 crc kubenswrapper[4904]: I1121 15:30:04.982473 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkr57\" (UniqueName: \"kubernetes.io/projected/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-kube-api-access-gkr57\") pod \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\" (UID: \"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8\") " Nov 21 15:30:04 crc kubenswrapper[4904]: I1121 15:30:04.992029 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-config-volume" (OuterVolumeSpecName: "config-volume") pod "ecc47cd9-fd7e-44fd-9af6-41e34c400ff8" (UID: "ecc47cd9-fd7e-44fd-9af6-41e34c400ff8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.000210 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ecc47cd9-fd7e-44fd-9af6-41e34c400ff8" (UID: "ecc47cd9-fd7e-44fd-9af6-41e34c400ff8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.001281 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-kube-api-access-gkr57" (OuterVolumeSpecName: "kube-api-access-gkr57") pod "ecc47cd9-fd7e-44fd-9af6-41e34c400ff8" (UID: "ecc47cd9-fd7e-44fd-9af6-41e34c400ff8"). InnerVolumeSpecName "kube-api-access-gkr57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.085279 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.085311 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.085321 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkr57\" (UniqueName: \"kubernetes.io/projected/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8-kube-api-access-gkr57\") on node \"crc\" DevicePath \"\"" Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.239613 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" event={"ID":"ecc47cd9-fd7e-44fd-9af6-41e34c400ff8","Type":"ContainerDied","Data":"bada7f47f3495b545fb290a284eefb0fa698acd8707486d88659b332c803c9f4"} Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.239673 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bada7f47f3495b545fb290a284eefb0fa698acd8707486d88659b332c803c9f4" Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.239672 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94" Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.365048 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr"] Nov 21 15:30:05 crc kubenswrapper[4904]: I1121 15:30:05.375420 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395605-9dsgr"] Nov 21 15:30:06 crc kubenswrapper[4904]: I1121 15:30:06.546513 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c9eb7f-cc64-4b66-975d-44cba0c94c78" path="/var/lib/kubelet/pods/e2c9eb7f-cc64-4b66-975d-44cba0c94c78/volumes" Nov 21 15:30:09 crc kubenswrapper[4904]: I1121 15:30:09.515687 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:30:09 crc kubenswrapper[4904]: E1121 15:30:09.516526 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:30:22 crc kubenswrapper[4904]: I1121 15:30:22.514092 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:30:22 crc kubenswrapper[4904]: E1121 15:30:22.515457 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:30:34 crc kubenswrapper[4904]: I1121 15:30:34.838602 4904 scope.go:117] "RemoveContainer" containerID="7571c8fbde2659fa72a90ed5b3d023e0ab441740c41f5a2dbf8c4f5d875c0c5f" Nov 21 15:30:35 crc kubenswrapper[4904]: I1121 15:30:35.513324 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:30:35 crc kubenswrapper[4904]: E1121 15:30:35.514103 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:30:49 crc kubenswrapper[4904]: I1121 15:30:49.518880 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:30:49 crc kubenswrapper[4904]: E1121 15:30:49.521834 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:31:01 crc kubenswrapper[4904]: I1121 15:31:01.513453 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:31:01 crc kubenswrapper[4904]: E1121 15:31:01.515509 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:31:16 crc kubenswrapper[4904]: I1121 15:31:16.523145 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:31:16 crc kubenswrapper[4904]: E1121 15:31:16.524458 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:31:29 crc kubenswrapper[4904]: I1121 15:31:29.514005 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:31:29 crc kubenswrapper[4904]: E1121 15:31:29.515924 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:31:44 crc kubenswrapper[4904]: I1121 15:31:44.791710 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:31:44 crc kubenswrapper[4904]: E1121 15:31:44.792847 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:31:59 crc kubenswrapper[4904]: I1121 15:31:59.512986 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:31:59 crc kubenswrapper[4904]: E1121 15:31:59.514270 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:32:13 crc kubenswrapper[4904]: I1121 15:32:13.513363 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:32:13 crc kubenswrapper[4904]: E1121 15:32:13.514359 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:32:27 crc kubenswrapper[4904]: I1121 15:32:27.513909 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:32:27 crc kubenswrapper[4904]: E1121 15:32:27.514735 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:32:41 crc kubenswrapper[4904]: I1121 15:32:41.513848 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:32:41 crc kubenswrapper[4904]: E1121 15:32:41.514614 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:32:55 crc kubenswrapper[4904]: I1121 15:32:55.513566 4904 
scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:32:55 crc kubenswrapper[4904]: E1121 15:32:55.514585 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:33:10 crc kubenswrapper[4904]: I1121 15:33:10.512914 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:33:10 crc kubenswrapper[4904]: E1121 15:33:10.513632 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:33:25 crc kubenswrapper[4904]: I1121 15:33:25.514401 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:33:25 crc kubenswrapper[4904]: E1121 15:33:25.515364 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:33:38 crc kubenswrapper[4904]: I1121 15:33:38.518630 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:33:38 crc kubenswrapper[4904]: E1121 15:33:38.519350 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:33:53 crc kubenswrapper[4904]: I1121 15:33:53.514269 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:33:53 crc kubenswrapper[4904]: E1121 15:33:53.515014 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:34:08 crc kubenswrapper[4904]: I1121 15:34:08.513957 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:34:08 crc kubenswrapper[4904]: E1121 15:34:08.514631 4904 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:34:23 crc kubenswrapper[4904]: I1121 15:34:23.513745 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:34:23 crc kubenswrapper[4904]: E1121 15:34:23.514486 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:34:34 crc kubenswrapper[4904]: I1121 15:34:34.513299 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d" Nov 21 15:34:35 crc kubenswrapper[4904]: I1121 15:34:35.339361 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"538435c29f353111f94e4953608328b746bcfae47a9d1cfbcfdcda2cf63beb24"} Nov 21 15:36:58 crc kubenswrapper[4904]: I1121 15:36:58.115995 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:36:58 crc kubenswrapper[4904]: I1121 15:36:58.120511 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.318943 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qfw7k"] Nov 21 15:37:18 crc kubenswrapper[4904]: E1121 15:37:18.325341 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc47cd9-fd7e-44fd-9af6-41e34c400ff8" containerName="collect-profiles" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.325372 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc47cd9-fd7e-44fd-9af6-41e34c400ff8" containerName="collect-profiles" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.327420 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc47cd9-fd7e-44fd-9af6-41e34c400ff8" containerName="collect-profiles" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.338600 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.362932 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfw7k"] Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.530114 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-utilities\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.530313 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-catalog-content\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.531212 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjr7\" (UniqueName: \"kubernetes.io/projected/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-kube-api-access-fcjr7\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.633937 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-catalog-content\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.634114 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcjr7\" (UniqueName: \"kubernetes.io/projected/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-kube-api-access-fcjr7\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.634253 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-utilities\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.634994 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-utilities\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.634995 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-catalog-content\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.657781 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fcjr7\" (UniqueName: \"kubernetes.io/projected/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-kube-api-access-fcjr7\") pod \"redhat-marketplace-qfw7k\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:18 crc kubenswrapper[4904]: I1121 15:37:18.668038 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:19 crc kubenswrapper[4904]: I1121 15:37:19.932228 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfw7k"] Nov 21 15:37:19 crc kubenswrapper[4904]: W1121 15:37:19.950856 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4fadd41_84dd_4ffd_a38f_c4df437b1f36.slice/crio-d6128c21869d0ff3dd105eaeac692ba7c27f3f9f05187bb7a3750d2b9839a454 WatchSource:0}: Error finding container d6128c21869d0ff3dd105eaeac692ba7c27f3f9f05187bb7a3750d2b9839a454: Status 404 returned error can't find the container with id d6128c21869d0ff3dd105eaeac692ba7c27f3f9f05187bb7a3750d2b9839a454 Nov 21 15:37:20 crc kubenswrapper[4904]: I1121 15:37:20.084857 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfw7k" event={"ID":"e4fadd41-84dd-4ffd-a38f-c4df437b1f36","Type":"ContainerStarted","Data":"d6128c21869d0ff3dd105eaeac692ba7c27f3f9f05187bb7a3750d2b9839a454"} Nov 21 15:37:21 crc kubenswrapper[4904]: I1121 15:37:21.098118 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfw7k" event={"ID":"e4fadd41-84dd-4ffd-a38f-c4df437b1f36","Type":"ContainerDied","Data":"4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085"} Nov 21 15:37:21 crc kubenswrapper[4904]: I1121 15:37:21.098222 4904 generic.go:334] "Generic (PLEG): container finished" podID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerID="4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085" exitCode=0 Nov 21 15:37:21 crc kubenswrapper[4904]: I1121 15:37:21.106951 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:37:22 crc kubenswrapper[4904]: I1121 15:37:22.110356 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfw7k" event={"ID":"e4fadd41-84dd-4ffd-a38f-c4df437b1f36","Type":"ContainerStarted","Data":"5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb"} Nov 21 15:37:23 crc kubenswrapper[4904]: I1121 15:37:23.127003 4904 generic.go:334] "Generic (PLEG): container finished" podID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerID="5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb" exitCode=0 Nov 21 15:37:23 crc kubenswrapper[4904]: I1121 15:37:23.127061 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfw7k" event={"ID":"e4fadd41-84dd-4ffd-a38f-c4df437b1f36","Type":"ContainerDied","Data":"5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb"} Nov 21 15:37:24 crc kubenswrapper[4904]: I1121 15:37:24.140800 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfw7k" event={"ID":"e4fadd41-84dd-4ffd-a38f-c4df437b1f36","Type":"ContainerStarted","Data":"afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca"} Nov 21 15:37:24 crc kubenswrapper[4904]: I1121 15:37:24.168471 4904 
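[Editor's note] The UniqueName and cleanup paths in the records above follow the kubelet's on-disk pod volume layout: /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volumeName>, where the plugin name's '/' is written as '~' on disk (kubernetes.io/empty-dir becomes kubernetes.io~empty-dir, as in HEAD-era restorecon paths). A minimal Go sketch of that path construction, for illustration only:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// podVolumeDir mirrors the layout implied by the log; it is a sketch,
// not the kubelet's actual helper.
func podVolumeDir(rootDir, podUID, pluginName, volumeName string) string {
	escaped := strings.ReplaceAll(pluginName, "/", "~")
	return filepath.Join(rootDir, "pods", podUID, "volumes", escaped, volumeName)
}

func main() {
	fmt.Println(podVolumeDir("/var/lib/kubelet",
		"e4fadd41-84dd-4ffd-a38f-c4df437b1f36",
		"kubernetes.io/empty-dir", "utilities"))
	// /var/lib/kubelet/pods/e4fadd41-84dd-4ffd-a38f-c4df437b1f36/volumes/kubernetes.io~empty-dir/utilities
}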
Nov 21 15:37:24 crc kubenswrapper[4904]: I1121 15:37:24.168471 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qfw7k" podStartSLOduration=3.743512872 podStartE2EDuration="6.16844864s" podCreationTimestamp="2025-11-21 15:37:18 +0000 UTC" firstStartedPulling="2025-11-21 15:37:21.100688911 +0000 UTC m=+7515.222221463" lastFinishedPulling="2025-11-21 15:37:23.525624679 +0000 UTC m=+7517.647157231" observedRunningTime="2025-11-21 15:37:24.157759551 +0000 UTC m=+7518.279292113" watchObservedRunningTime="2025-11-21 15:37:24.16844864 +0000 UTC m=+7518.289981202"
Nov 21 15:37:28 crc kubenswrapper[4904]: I1121 15:37:28.113719 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:37:28 crc kubenswrapper[4904]: I1121 15:37:28.114274 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:37:28 crc kubenswrapper[4904]: I1121 15:37:28.669553 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qfw7k"
Nov 21 15:37:28 crc kubenswrapper[4904]: I1121 15:37:28.669901 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qfw7k"
Nov 21 15:37:28 crc kubenswrapper[4904]: I1121 15:37:28.727892 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qfw7k"
Nov 21 15:37:29 crc kubenswrapper[4904]: I1121 15:37:29.259520 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qfw7k"
Nov 21 15:37:29 crc kubenswrapper[4904]: I1121 15:37:29.312780 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfw7k"]
Nov 21 15:37:31 crc kubenswrapper[4904]: I1121 15:37:31.226302 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qfw7k" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerName="registry-server" containerID="cri-o://afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca" gracePeriod=2
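[Editor's note] The pod_startup_latency_tracker record encodes two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. The arithmetic checks out against the record above; a minimal Go sketch reproducing it from the logged timestamps:

package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the wall-clock format the kubelet logs
// (the m=+... monotonic suffixes are ignored here).
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-21 15:37:18 +0000 UTC")
	firstPull := mustParse("2025-11-21 15:37:21.100688911 +0000 UTC")
	lastPull := mustParse("2025-11-21 15:37:23.525624679 +0000 UTC")
	running := mustParse("2025-11-21 15:37:24.16844864 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // pull window excluded
	fmt.Println(e2e, slo)                // 6.16844864s 3.743512872s
}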
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:31 crc kubenswrapper[4904]: I1121 15:37:31.985016 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcjr7\" (UniqueName: \"kubernetes.io/projected/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-kube-api-access-fcjr7\") pod \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " Nov 21 15:37:31 crc kubenswrapper[4904]: I1121 15:37:31.985180 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-utilities\") pod \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " Nov 21 15:37:31 crc kubenswrapper[4904]: I1121 15:37:31.985432 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-catalog-content\") pod \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\" (UID: \"e4fadd41-84dd-4ffd-a38f-c4df437b1f36\") " Nov 21 15:37:31 crc kubenswrapper[4904]: I1121 15:37:31.988305 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-utilities" (OuterVolumeSpecName: "utilities") pod "e4fadd41-84dd-4ffd-a38f-c4df437b1f36" (UID: "e4fadd41-84dd-4ffd-a38f-c4df437b1f36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:37:31 crc kubenswrapper[4904]: I1121 15:37:31.995582 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-kube-api-access-fcjr7" (OuterVolumeSpecName: "kube-api-access-fcjr7") pod "e4fadd41-84dd-4ffd-a38f-c4df437b1f36" (UID: "e4fadd41-84dd-4ffd-a38f-c4df437b1f36"). InnerVolumeSpecName "kube-api-access-fcjr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.013952 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4fadd41-84dd-4ffd-a38f-c4df437b1f36" (UID: "e4fadd41-84dd-4ffd-a38f-c4df437b1f36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.087899 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcjr7\" (UniqueName: \"kubernetes.io/projected/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-kube-api-access-fcjr7\") on node \"crc\" DevicePath \"\"" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.087938 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.087954 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fadd41-84dd-4ffd-a38f-c4df437b1f36-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.235571 4904 generic.go:334] "Generic (PLEG): container finished" podID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerID="afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca" exitCode=0 Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.235643 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfw7k" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.235643 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfw7k" event={"ID":"e4fadd41-84dd-4ffd-a38f-c4df437b1f36","Type":"ContainerDied","Data":"afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca"} Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.235720 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfw7k" event={"ID":"e4fadd41-84dd-4ffd-a38f-c4df437b1f36","Type":"ContainerDied","Data":"d6128c21869d0ff3dd105eaeac692ba7c27f3f9f05187bb7a3750d2b9839a454"} Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.235742 4904 scope.go:117] "RemoveContainer" containerID="afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.272249 4904 scope.go:117] "RemoveContainer" containerID="5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.283919 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfw7k"] Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.297814 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfw7k"] Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.298918 4904 scope.go:117] "RemoveContainer" containerID="4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.345770 4904 scope.go:117] "RemoveContainer" containerID="afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca" Nov 21 15:37:32 crc kubenswrapper[4904]: E1121 15:37:32.348304 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca\": container with ID starting with afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca not found: ID does not exist" containerID="afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.349067 4904 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca"} err="failed to get container status \"afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca\": rpc error: code = NotFound desc = could not find container \"afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca\": container with ID starting with afdafc31beb2327a93ef751a1c6579a9a3320a3832c86857176f031d4f96c7ca not found: ID does not exist" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.349104 4904 scope.go:117] "RemoveContainer" containerID="5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb" Nov 21 15:37:32 crc kubenswrapper[4904]: E1121 15:37:32.349559 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb\": container with ID starting with 5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb not found: ID does not exist" containerID="5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.349593 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb"} err="failed to get container status \"5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb\": rpc error: code = NotFound desc = could not find container \"5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb\": container with ID starting with 5fc31570e7803c3ac65dc8e134849555138aeee7d50d726387faf17dea47f8eb not found: ID does not exist" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.349622 4904 scope.go:117] "RemoveContainer" containerID="4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085" Nov 21 15:37:32 crc kubenswrapper[4904]: E1121 15:37:32.350011 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085\": container with ID starting with 4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085 not found: ID does not exist" containerID="4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.350050 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085"} err="failed to get container status \"4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085\": rpc error: code = NotFound desc = could not find container \"4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085\": container with ID starting with 4f1178a149882a63375c658c668ce9f06e70c721614323adc61ec97980a2a085 not found: ID does not exist" Nov 21 15:37:32 crc kubenswrapper[4904]: I1121 15:37:32.526994 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" path="/var/lib/kubelet/pods/e4fadd41-84dd-4ffd-a38f-c4df437b1f36/volumes" Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.114084 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
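[Editor's note] The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above are a benign race: by the time the deletor re-queries the CRI (a gRPC API), CRI-O has already removed the container, so the lookup returns codes.NotFound and the kubelet logs it and moves on. Cleanup code of this shape typically treats NotFound as success; a hedged Go sketch of that pattern (remove is a hypothetical stand-in for a real CRI RemoveContainer call):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent swallows NotFound: the container being already gone is
// the desired end state, so it is not an error worth retrying.
func removeIfPresent(id string, remove func(string) error) error {
	if err := remove(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already gone: %v\n", id, err)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	notFound := status.Error(codes.NotFound, "could not find container")
	_ = removeIfPresent("afdafc31", func(string) error { return notFound })
}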
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.114084 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.114597 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.114650 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.115615 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"538435c29f353111f94e4953608328b746bcfae47a9d1cfbcfdcda2cf63beb24"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.115698 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://538435c29f353111f94e4953608328b746bcfae47a9d1cfbcfdcda2cf63beb24" gracePeriod=600
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.528597 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="538435c29f353111f94e4953608328b746bcfae47a9d1cfbcfdcda2cf63beb24" exitCode=0
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.528684 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"538435c29f353111f94e4953608328b746bcfae47a9d1cfbcfdcda2cf63beb24"}
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.529003 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"}
Nov 21 15:37:58 crc kubenswrapper[4904]: I1121 15:37:58.529032 4904 scope.go:117] "RemoveContainer" containerID="f15806b45baad50a02d95791d25a5952872124182f2bf1aff4377e313590ff8d"
Nov 21 15:39:58 crc kubenswrapper[4904]: I1121 15:39:58.113592 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:39:58 crc kubenswrapper[4904]: I1121 15:39:58.115824 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
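[Editor's note] The liveness failure above is an ordinary HTTP GET against 127.0.0.1:8798/health; "connection refused" means nothing is listening, so the kubelet kills the container (600s grace) and restarts it. A minimal Go sketch of the same check; the 1s timeout is an assumption, since the pod's probe settings are not visible in the log (the kubelet treats 2xx/3xx as success):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. connect: connection refused
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure: HTTP", resp.StatusCode)
	}
}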
Nov 21 15:40:28 crc kubenswrapper[4904]: I1121 15:40:28.113486 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:40:28 crc kubenswrapper[4904]: I1121 15:40:28.114148 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.432202 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25544"]
Nov 21 15:40:49 crc kubenswrapper[4904]: E1121 15:40:49.433370 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerName="registry-server"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.433391 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerName="registry-server"
Nov 21 15:40:49 crc kubenswrapper[4904]: E1121 15:40:49.433437 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerName="extract-utilities"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.433445 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerName="extract-utilities"
Nov 21 15:40:49 crc kubenswrapper[4904]: E1121 15:40:49.433461 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerName="extract-content"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.433468 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerName="extract-content"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.433691 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4fadd41-84dd-4ffd-a38f-c4df437b1f36" containerName="registry-server"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.435730 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.454776 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25544"]
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.502265 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-catalog-content\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.502514 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-utilities\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.502561 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64x9\" (UniqueName: \"kubernetes.io/projected/9809b75f-4806-45d5-9c48-976471838dff-kube-api-access-r64x9\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.605002 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-utilities\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.605124 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-utilities\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.605526 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r64x9\" (UniqueName: \"kubernetes.io/projected/9809b75f-4806-45d5-9c48-976471838dff-kube-api-access-r64x9\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.606049 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-catalog-content\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.607097 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-catalog-content\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.632751 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r64x9\" (UniqueName: \"kubernetes.io/projected/9809b75f-4806-45d5-9c48-976471838dff-kube-api-access-r64x9\") pod \"community-operators-25544\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:49 crc kubenswrapper[4904]: I1121 15:40:49.765666 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25544"
Nov 21 15:40:50 crc kubenswrapper[4904]: I1121 15:40:50.506442 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25544"]
Nov 21 15:40:51 crc kubenswrapper[4904]: I1121 15:40:51.446127 4904 generic.go:334] "Generic (PLEG): container finished" podID="9809b75f-4806-45d5-9c48-976471838dff" containerID="d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da" exitCode=0
Nov 21 15:40:51 crc kubenswrapper[4904]: I1121 15:40:51.446192 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25544" event={"ID":"9809b75f-4806-45d5-9c48-976471838dff","Type":"ContainerDied","Data":"d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da"}
Nov 21 15:40:51 crc kubenswrapper[4904]: I1121 15:40:51.446573 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25544" event={"ID":"9809b75f-4806-45d5-9c48-976471838dff","Type":"ContainerStarted","Data":"53c148bb4285164bc018ab79398be3e92930fe3fe7bfd8e00024093adf2f5cb9"}
Nov 21 15:40:53 crc kubenswrapper[4904]: I1121 15:40:53.474775 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25544" event={"ID":"9809b75f-4806-45d5-9c48-976471838dff","Type":"ContainerStarted","Data":"e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f"}
Nov 21 15:40:56 crc kubenswrapper[4904]: I1121 15:40:56.516326 4904 generic.go:334] "Generic (PLEG): container finished" podID="9809b75f-4806-45d5-9c48-976471838dff" containerID="e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f" exitCode=0
Nov 21 15:40:56 crc kubenswrapper[4904]: I1121 15:40:56.529245 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25544" event={"ID":"9809b75f-4806-45d5-9c48-976471838dff","Type":"ContainerDied","Data":"e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f"}
Nov 21 15:40:57 crc kubenswrapper[4904]: I1121 15:40:57.529052 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25544" event={"ID":"9809b75f-4806-45d5-9c48-976471838dff","Type":"ContainerStarted","Data":"a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01"}
Nov 21 15:40:57 crc kubenswrapper[4904]: I1121 15:40:57.560555 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25544" podStartSLOduration=3.08428929 podStartE2EDuration="8.560535002s" podCreationTimestamp="2025-11-21 15:40:49 +0000 UTC" firstStartedPulling="2025-11-21 15:40:51.450194762 +0000 UTC m=+7725.571727314" lastFinishedPulling="2025-11-21 15:40:56.926440474 +0000 UTC m=+7731.047973026" observedRunningTime="2025-11-21 15:40:57.54681365 +0000 UTC m=+7731.668346202" watchObservedRunningTime="2025-11-21 15:40:57.560535002 +0000 UTC m=+7731.682067554"
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.113806 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.114174 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.114224 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.115173 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.115238 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" gracePeriod=600
Nov 21 15:40:58 crc kubenswrapper[4904]: E1121 15:40:58.243914 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.545204 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" exitCode=0
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.545252 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"}
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.545365 4904 scope.go:117] "RemoveContainer" containerID="538435c29f353111f94e4953608328b746bcfae47a9d1cfbcfdcda2cf63beb24"
Nov 21 15:40:58 crc kubenswrapper[4904]: I1121 15:40:58.546429 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:40:58 crc kubenswrapper[4904]: E1121 15:40:58.546744 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
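[Editor's note] The "back-off 5m0s" in the CrashLoopBackOff errors above is the cap of the kubelet's exponential restart backoff, which is why the same error repeats for the rest of this log: each sync attempt is refused until the 5m window elapses. A small Go sketch of the delay schedule; the 10s initial delay doubling per crash is the commonly documented kubelet default (an assumption, not read from this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // "back-off 5m0s" is this cap
		}
	}
}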
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:40:59 crc kubenswrapper[4904]: I1121 15:40:59.766869 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25544" Nov 21 15:40:59 crc kubenswrapper[4904]: I1121 15:40:59.767898 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25544" Nov 21 15:41:00 crc kubenswrapper[4904]: I1121 15:41:00.836531 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-25544" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="registry-server" probeResult="failure" output=< Nov 21 15:41:00 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:41:00 crc kubenswrapper[4904]: > Nov 21 15:41:09 crc kubenswrapper[4904]: I1121 15:41:09.516756 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:41:09 crc kubenswrapper[4904]: E1121 15:41:09.518709 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:41:09 crc kubenswrapper[4904]: I1121 15:41:09.829515 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25544" Nov 21 15:41:09 crc kubenswrapper[4904]: I1121 15:41:09.899481 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25544" Nov 21 15:41:10 crc kubenswrapper[4904]: I1121 15:41:10.081188 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25544"] Nov 21 15:41:11 crc kubenswrapper[4904]: I1121 15:41:11.738320 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-25544" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="registry-server" containerID="cri-o://a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01" gracePeriod=2 Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.287129 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25544" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.487248 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-catalog-content\") pod \"9809b75f-4806-45d5-9c48-976471838dff\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.487572 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r64x9\" (UniqueName: \"kubernetes.io/projected/9809b75f-4806-45d5-9c48-976471838dff-kube-api-access-r64x9\") pod \"9809b75f-4806-45d5-9c48-976471838dff\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.487920 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-utilities\") pod \"9809b75f-4806-45d5-9c48-976471838dff\" (UID: \"9809b75f-4806-45d5-9c48-976471838dff\") " Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.488800 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-utilities" (OuterVolumeSpecName: "utilities") pod "9809b75f-4806-45d5-9c48-976471838dff" (UID: "9809b75f-4806-45d5-9c48-976471838dff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.489748 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.501945 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9809b75f-4806-45d5-9c48-976471838dff-kube-api-access-r64x9" (OuterVolumeSpecName: "kube-api-access-r64x9") pod "9809b75f-4806-45d5-9c48-976471838dff" (UID: "9809b75f-4806-45d5-9c48-976471838dff"). InnerVolumeSpecName "kube-api-access-r64x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.558527 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9809b75f-4806-45d5-9c48-976471838dff" (UID: "9809b75f-4806-45d5-9c48-976471838dff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.592886 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9809b75f-4806-45d5-9c48-976471838dff-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.592934 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r64x9\" (UniqueName: \"kubernetes.io/projected/9809b75f-4806-45d5-9c48-976471838dff-kube-api-access-r64x9\") on node \"crc\" DevicePath \"\"" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.751315 4904 generic.go:334] "Generic (PLEG): container finished" podID="9809b75f-4806-45d5-9c48-976471838dff" containerID="a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01" exitCode=0 Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.751369 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25544" event={"ID":"9809b75f-4806-45d5-9c48-976471838dff","Type":"ContainerDied","Data":"a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01"} Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.751421 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25544" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.751430 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25544" event={"ID":"9809b75f-4806-45d5-9c48-976471838dff","Type":"ContainerDied","Data":"53c148bb4285164bc018ab79398be3e92930fe3fe7bfd8e00024093adf2f5cb9"} Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.751456 4904 scope.go:117] "RemoveContainer" containerID="a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.797779 4904 scope.go:117] "RemoveContainer" containerID="e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.806928 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25544"] Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.827480 4904 scope.go:117] "RemoveContainer" containerID="d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.830438 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-25544"] Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.918646 4904 scope.go:117] "RemoveContainer" containerID="a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01" Nov 21 15:41:12 crc kubenswrapper[4904]: E1121 15:41:12.920394 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01\": container with ID starting with a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01 not found: ID does not exist" containerID="a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.920560 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01"} err="failed to get container status 
\"a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01\": rpc error: code = NotFound desc = could not find container \"a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01\": container with ID starting with a939c7a87f9599c623dd68bf8e621aa6041e51a23cce78159ebd4e83f1196c01 not found: ID does not exist" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.920734 4904 scope.go:117] "RemoveContainer" containerID="e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f" Nov 21 15:41:12 crc kubenswrapper[4904]: E1121 15:41:12.925172 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f\": container with ID starting with e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f not found: ID does not exist" containerID="e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.925472 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f"} err="failed to get container status \"e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f\": rpc error: code = NotFound desc = could not find container \"e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f\": container with ID starting with e0754d7ffa913fb14e7fa497907c5e417798113061e04bb61e160c30d51f1b9f not found: ID does not exist" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.925586 4904 scope.go:117] "RemoveContainer" containerID="d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da" Nov 21 15:41:12 crc kubenswrapper[4904]: E1121 15:41:12.926165 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da\": container with ID starting with d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da not found: ID does not exist" containerID="d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da" Nov 21 15:41:12 crc kubenswrapper[4904]: I1121 15:41:12.926217 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da"} err="failed to get container status \"d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da\": rpc error: code = NotFound desc = could not find container \"d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da\": container with ID starting with d496561860b729f0edd841d9fad7c856dbfdbd8f999b151242130e75bdfb59da not found: ID does not exist" Nov 21 15:41:14 crc kubenswrapper[4904]: I1121 15:41:14.527752 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9809b75f-4806-45d5-9c48-976471838dff" path="/var/lib/kubelet/pods/9809b75f-4806-45d5-9c48-976471838dff/volumes" Nov 21 15:41:21 crc kubenswrapper[4904]: I1121 15:41:21.514694 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:41:21 crc kubenswrapper[4904]: E1121 15:41:21.516085 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Nov 21 15:41:35 crc kubenswrapper[4904]: I1121 15:41:35.514273 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:41:35 crc kubenswrapper[4904]: E1121 15:41:35.515194 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:41:37 crc kubenswrapper[4904]: I1121 15:41:37.265104 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" podUID="e819c802-1c71-4668-bc99-5b41cc11c656" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Nov 21 15:41:46 crc kubenswrapper[4904]: I1121 15:41:46.520224 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:41:46 crc kubenswrapper[4904]: E1121 15:41:46.521036 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:42:01 crc kubenswrapper[4904]: I1121 15:42:01.512976 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:42:01 crc kubenswrapper[4904]: E1121 15:42:01.513760 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:42:13 crc kubenswrapper[4904]: I1121 15:42:13.513623 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:42:13 crc kubenswrapper[4904]: E1121 15:42:13.514351 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:42:28 crc kubenswrapper[4904]: I1121 15:42:28.513547 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:42:28 crc kubenswrapper[4904]: E1121 15:42:28.514437 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:42:39 crc kubenswrapper[4904]: I1121 15:42:39.513021 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:42:39 crc kubenswrapper[4904]: E1121 15:42:39.514082 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:42:50 crc kubenswrapper[4904]: I1121 15:42:50.513831 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:42:50 crc kubenswrapper[4904]: E1121 15:42:50.514739 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:43:04 crc kubenswrapper[4904]: I1121 15:43:04.514853 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:43:04 crc kubenswrapper[4904]: E1121 15:43:04.515794 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:43:15 crc kubenswrapper[4904]: I1121 15:43:15.514629 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:43:15 crc kubenswrapper[4904]: E1121 15:43:15.515735 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:43:16 crc kubenswrapper[4904]: I1121 15:43:16.775566 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" podUID="4b290147-91ef-4734-961b-b61487960c33" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 21 15:43:16 crc kubenswrapper[4904]: I1121 15:43:16.776250 4904 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-hm8cc" podUID="4b290147-91ef-4734-961b-b61487960c33" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 21 15:43:26 crc kubenswrapper[4904]: I1121 15:43:26.522567 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677"
Nov 21 15:43:26 crc kubenswrapper[4904]: E1121 15:43:26.523363 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.477122 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h72xs"]
Nov 21 15:43:31 crc kubenswrapper[4904]: E1121 15:43:31.478079 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="extract-content"
Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.478092 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="extract-content"
Nov 21 15:43:31 crc kubenswrapper[4904]: E1121 15:43:31.478107 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="registry-server"
Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.478114 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="registry-server"
Nov 21 15:43:31 crc kubenswrapper[4904]: E1121 15:43:31.478128 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="extract-utilities"
Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.478134 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="extract-utilities"
Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.478349 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="9809b75f-4806-45d5-9c48-976471838dff" containerName="registry-server"
Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.480081 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h72xs"
Need to start a new one" pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.500316 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h72xs"] Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.636097 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69stw\" (UniqueName: \"kubernetes.io/projected/5806264c-d78c-4edf-b7ee-763747c2cafd-kube-api-access-69stw\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.636837 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-utilities\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.637101 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-catalog-content\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.738873 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69stw\" (UniqueName: \"kubernetes.io/projected/5806264c-d78c-4edf-b7ee-763747c2cafd-kube-api-access-69stw\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.739031 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-utilities\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.739104 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-catalog-content\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.739744 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-catalog-content\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.739741 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-utilities\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.782528 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-69stw\" (UniqueName: \"kubernetes.io/projected/5806264c-d78c-4edf-b7ee-763747c2cafd-kube-api-access-69stw\") pod \"certified-operators-h72xs\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:31 crc kubenswrapper[4904]: I1121 15:43:31.811227 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:32 crc kubenswrapper[4904]: I1121 15:43:32.409595 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h72xs"] Nov 21 15:43:32 crc kubenswrapper[4904]: W1121 15:43:32.419071 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5806264c_d78c_4edf_b7ee_763747c2cafd.slice/crio-b30a9db2fe53ee4b5505c4d5a92adab863f4236be58b6212ec879a6ee4279780 WatchSource:0}: Error finding container b30a9db2fe53ee4b5505c4d5a92adab863f4236be58b6212ec879a6ee4279780: Status 404 returned error can't find the container with id b30a9db2fe53ee4b5505c4d5a92adab863f4236be58b6212ec879a6ee4279780 Nov 21 15:43:33 crc kubenswrapper[4904]: I1121 15:43:33.330082 4904 generic.go:334] "Generic (PLEG): container finished" podID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerID="7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b" exitCode=0 Nov 21 15:43:33 crc kubenswrapper[4904]: I1121 15:43:33.330136 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h72xs" event={"ID":"5806264c-d78c-4edf-b7ee-763747c2cafd","Type":"ContainerDied","Data":"7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b"} Nov 21 15:43:33 crc kubenswrapper[4904]: I1121 15:43:33.331413 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h72xs" event={"ID":"5806264c-d78c-4edf-b7ee-763747c2cafd","Type":"ContainerStarted","Data":"b30a9db2fe53ee4b5505c4d5a92adab863f4236be58b6212ec879a6ee4279780"} Nov 21 15:43:33 crc kubenswrapper[4904]: I1121 15:43:33.332501 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:43:35 crc kubenswrapper[4904]: I1121 15:43:35.352987 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h72xs" event={"ID":"5806264c-d78c-4edf-b7ee-763747c2cafd","Type":"ContainerStarted","Data":"649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2"} Nov 21 15:43:37 crc kubenswrapper[4904]: I1121 15:43:37.513480 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:43:37 crc kubenswrapper[4904]: E1121 15:43:37.514107 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:43:39 crc kubenswrapper[4904]: I1121 15:43:39.404274 4904 generic.go:334] "Generic (PLEG): container finished" podID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerID="649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2" exitCode=0 Nov 21 15:43:39 crc 
kubenswrapper[4904]: I1121 15:43:39.404856 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h72xs" event={"ID":"5806264c-d78c-4edf-b7ee-763747c2cafd","Type":"ContainerDied","Data":"649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2"} Nov 21 15:43:40 crc kubenswrapper[4904]: I1121 15:43:40.416968 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h72xs" event={"ID":"5806264c-d78c-4edf-b7ee-763747c2cafd","Type":"ContainerStarted","Data":"01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20"} Nov 21 15:43:40 crc kubenswrapper[4904]: I1121 15:43:40.436175 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h72xs" podStartSLOduration=2.697847446 podStartE2EDuration="9.436148956s" podCreationTimestamp="2025-11-21 15:43:31 +0000 UTC" firstStartedPulling="2025-11-21 15:43:33.332220266 +0000 UTC m=+7887.453752818" lastFinishedPulling="2025-11-21 15:43:40.070521776 +0000 UTC m=+7894.192054328" observedRunningTime="2025-11-21 15:43:40.432702922 +0000 UTC m=+7894.554235474" watchObservedRunningTime="2025-11-21 15:43:40.436148956 +0000 UTC m=+7894.557681498" Nov 21 15:43:41 crc kubenswrapper[4904]: I1121 15:43:41.811648 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:41 crc kubenswrapper[4904]: I1121 15:43:41.811973 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:41 crc kubenswrapper[4904]: I1121 15:43:41.865648 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:51 crc kubenswrapper[4904]: I1121 15:43:51.513459 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:43:51 crc kubenswrapper[4904]: E1121 15:43:51.514249 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:43:51 crc kubenswrapper[4904]: I1121 15:43:51.860004 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:51 crc kubenswrapper[4904]: I1121 15:43:51.912198 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h72xs"] Nov 21 15:43:52 crc kubenswrapper[4904]: I1121 15:43:52.560196 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h72xs" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerName="registry-server" containerID="cri-o://01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20" gracePeriod=2 Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.147833 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.312085 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69stw\" (UniqueName: \"kubernetes.io/projected/5806264c-d78c-4edf-b7ee-763747c2cafd-kube-api-access-69stw\") pod \"5806264c-d78c-4edf-b7ee-763747c2cafd\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.312241 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-catalog-content\") pod \"5806264c-d78c-4edf-b7ee-763747c2cafd\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.312618 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-utilities\") pod \"5806264c-d78c-4edf-b7ee-763747c2cafd\" (UID: \"5806264c-d78c-4edf-b7ee-763747c2cafd\") " Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.314162 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-utilities" (OuterVolumeSpecName: "utilities") pod "5806264c-d78c-4edf-b7ee-763747c2cafd" (UID: "5806264c-d78c-4edf-b7ee-763747c2cafd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.322041 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5806264c-d78c-4edf-b7ee-763747c2cafd-kube-api-access-69stw" (OuterVolumeSpecName: "kube-api-access-69stw") pod "5806264c-d78c-4edf-b7ee-763747c2cafd" (UID: "5806264c-d78c-4edf-b7ee-763747c2cafd"). InnerVolumeSpecName "kube-api-access-69stw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.369884 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5806264c-d78c-4edf-b7ee-763747c2cafd" (UID: "5806264c-d78c-4edf-b7ee-763747c2cafd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.416109 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.416149 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69stw\" (UniqueName: \"kubernetes.io/projected/5806264c-d78c-4edf-b7ee-763747c2cafd-kube-api-access-69stw\") on node \"crc\" DevicePath \"\"" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.416164 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5806264c-d78c-4edf-b7ee-763747c2cafd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.574536 4904 generic.go:334] "Generic (PLEG): container finished" podID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerID="01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20" exitCode=0 Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.574609 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h72xs" event={"ID":"5806264c-d78c-4edf-b7ee-763747c2cafd","Type":"ContainerDied","Data":"01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20"} Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.574611 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h72xs" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.574675 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h72xs" event={"ID":"5806264c-d78c-4edf-b7ee-763747c2cafd","Type":"ContainerDied","Data":"b30a9db2fe53ee4b5505c4d5a92adab863f4236be58b6212ec879a6ee4279780"} Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.574705 4904 scope.go:117] "RemoveContainer" containerID="01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.608839 4904 scope.go:117] "RemoveContainer" containerID="649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.617310 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h72xs"] Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.631762 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h72xs"] Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.658938 4904 scope.go:117] "RemoveContainer" containerID="7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.702941 4904 scope.go:117] "RemoveContainer" containerID="01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20" Nov 21 15:43:53 crc kubenswrapper[4904]: E1121 15:43:53.703546 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20\": container with ID starting with 01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20 not found: ID does not exist" containerID="01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.703695 
4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20"} err="failed to get container status \"01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20\": rpc error: code = NotFound desc = could not find container \"01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20\": container with ID starting with 01dcb0bcdcbed3d8f1e24c17e5e7818136ce8488f1e7453fecce163918166a20 not found: ID does not exist" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.704036 4904 scope.go:117] "RemoveContainer" containerID="649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2" Nov 21 15:43:53 crc kubenswrapper[4904]: E1121 15:43:53.704477 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2\": container with ID starting with 649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2 not found: ID does not exist" containerID="649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.704541 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2"} err="failed to get container status \"649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2\": rpc error: code = NotFound desc = could not find container \"649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2\": container with ID starting with 649aaa402574e99b9a698d930079f17188c51598b75c2a1f57055e762642bcf2 not found: ID does not exist" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.704582 4904 scope.go:117] "RemoveContainer" containerID="7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b" Nov 21 15:43:53 crc kubenswrapper[4904]: E1121 15:43:53.705197 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b\": container with ID starting with 7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b not found: ID does not exist" containerID="7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b" Nov 21 15:43:53 crc kubenswrapper[4904]: I1121 15:43:53.705248 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b"} err="failed to get container status \"7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b\": rpc error: code = NotFound desc = could not find container \"7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b\": container with ID starting with 7dd03000fd78414041fb8ba327cd8715dead361111e50e04c1d90c032e3aa73b not found: ID does not exist" Nov 21 15:43:54 crc kubenswrapper[4904]: I1121 15:43:54.560637 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" path="/var/lib/kubelet/pods/5806264c-d78c-4edf-b7ee-763747c2cafd/volumes" Nov 21 15:44:02 crc kubenswrapper[4904]: I1121 15:44:02.513327 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:44:02 crc kubenswrapper[4904]: E1121 15:44:02.514123 4904 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:44:02 crc kubenswrapper[4904]: I1121 15:44:02.958886 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-htm7l"] Nov 21 15:44:02 crc kubenswrapper[4904]: E1121 15:44:02.960560 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerName="extract-utilities" Nov 21 15:44:02 crc kubenswrapper[4904]: I1121 15:44:02.960586 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerName="extract-utilities" Nov 21 15:44:02 crc kubenswrapper[4904]: E1121 15:44:02.960624 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerName="registry-server" Nov 21 15:44:02 crc kubenswrapper[4904]: I1121 15:44:02.960633 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerName="registry-server" Nov 21 15:44:02 crc kubenswrapper[4904]: E1121 15:44:02.960649 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerName="extract-content" Nov 21 15:44:02 crc kubenswrapper[4904]: I1121 15:44:02.960679 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerName="extract-content" Nov 21 15:44:02 crc kubenswrapper[4904]: I1121 15:44:02.961156 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="5806264c-d78c-4edf-b7ee-763747c2cafd" containerName="registry-server" Nov 21 15:44:02 crc kubenswrapper[4904]: I1121 15:44:02.994252 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htm7l"] Nov 21 15:44:02 crc kubenswrapper[4904]: I1121 15:44:02.994470 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.159792 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmvlh\" (UniqueName: \"kubernetes.io/projected/a9c60445-7574-4327-bd71-581b0871e780-kube-api-access-kmvlh\") pod \"redhat-operators-htm7l\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.159960 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-catalog-content\") pod \"redhat-operators-htm7l\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.160082 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-utilities\") pod \"redhat-operators-htm7l\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.262407 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmvlh\" (UniqueName: \"kubernetes.io/projected/a9c60445-7574-4327-bd71-581b0871e780-kube-api-access-kmvlh\") pod \"redhat-operators-htm7l\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.262514 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-catalog-content\") pod \"redhat-operators-htm7l\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.262571 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-utilities\") pod \"redhat-operators-htm7l\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.263010 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-utilities\") pod \"redhat-operators-htm7l\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.263463 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-catalog-content\") pod \"redhat-operators-htm7l\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.286377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmvlh\" (UniqueName: \"kubernetes.io/projected/a9c60445-7574-4327-bd71-581b0871e780-kube-api-access-kmvlh\") pod \"redhat-operators-htm7l\" (UID: 
\"a9c60445-7574-4327-bd71-581b0871e780\") " pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.319836 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:03 crc kubenswrapper[4904]: I1121 15:44:03.920021 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htm7l"] Nov 21 15:44:04 crc kubenswrapper[4904]: I1121 15:44:04.696894 4904 generic.go:334] "Generic (PLEG): container finished" podID="a9c60445-7574-4327-bd71-581b0871e780" containerID="b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de" exitCode=0 Nov 21 15:44:04 crc kubenswrapper[4904]: I1121 15:44:04.696944 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htm7l" event={"ID":"a9c60445-7574-4327-bd71-581b0871e780","Type":"ContainerDied","Data":"b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de"} Nov 21 15:44:04 crc kubenswrapper[4904]: I1121 15:44:04.697177 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htm7l" event={"ID":"a9c60445-7574-4327-bd71-581b0871e780","Type":"ContainerStarted","Data":"267a4c0fb833f8dc494fd3d5030986c7fcbba44ba9b14886f9ae6066c17758fd"} Nov 21 15:44:07 crc kubenswrapper[4904]: I1121 15:44:07.739829 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htm7l" event={"ID":"a9c60445-7574-4327-bd71-581b0871e780","Type":"ContainerStarted","Data":"307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c"} Nov 21 15:44:14 crc kubenswrapper[4904]: I1121 15:44:14.513229 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:44:14 crc kubenswrapper[4904]: E1121 15:44:14.514006 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:44:19 crc kubenswrapper[4904]: I1121 15:44:19.897278 4904 generic.go:334] "Generic (PLEG): container finished" podID="a9c60445-7574-4327-bd71-581b0871e780" containerID="307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c" exitCode=0 Nov 21 15:44:19 crc kubenswrapper[4904]: I1121 15:44:19.897696 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htm7l" event={"ID":"a9c60445-7574-4327-bd71-581b0871e780","Type":"ContainerDied","Data":"307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c"} Nov 21 15:44:21 crc kubenswrapper[4904]: I1121 15:44:21.930144 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htm7l" event={"ID":"a9c60445-7574-4327-bd71-581b0871e780","Type":"ContainerStarted","Data":"0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3"} Nov 21 15:44:21 crc kubenswrapper[4904]: I1121 15:44:21.952178 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-htm7l" podStartSLOduration=3.862379166 podStartE2EDuration="19.952158886s" podCreationTimestamp="2025-11-21 15:44:02 +0000 UTC" 
firstStartedPulling="2025-11-21 15:44:04.699011186 +0000 UTC m=+7918.820543738" lastFinishedPulling="2025-11-21 15:44:20.788790886 +0000 UTC m=+7934.910323458" observedRunningTime="2025-11-21 15:44:21.946193142 +0000 UTC m=+7936.067725694" watchObservedRunningTime="2025-11-21 15:44:21.952158886 +0000 UTC m=+7936.073691458" Nov 21 15:44:22 crc kubenswrapper[4904]: I1121 15:44:22.260754 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-6f94fcfbbf-pg26j" podUID="e819c802-1c71-4668-bc99-5b41cc11c656" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 21 15:44:23 crc kubenswrapper[4904]: I1121 15:44:23.320112 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:23 crc kubenswrapper[4904]: I1121 15:44:23.320193 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:44:24 crc kubenswrapper[4904]: I1121 15:44:24.367571 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:44:24 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:44:24 crc kubenswrapper[4904]: > Nov 21 15:44:25 crc kubenswrapper[4904]: I1121 15:44:25.513419 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:44:25 crc kubenswrapper[4904]: E1121 15:44:25.514106 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:44:34 crc kubenswrapper[4904]: I1121 15:44:34.370203 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:44:34 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:44:34 crc kubenswrapper[4904]: > Nov 21 15:44:36 crc kubenswrapper[4904]: I1121 15:44:36.522048 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:44:36 crc kubenswrapper[4904]: E1121 15:44:36.523510 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:44:44 crc kubenswrapper[4904]: I1121 15:44:44.376147 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:44:44 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 
1s Nov 21 15:44:44 crc kubenswrapper[4904]: > Nov 21 15:44:48 crc kubenswrapper[4904]: I1121 15:44:48.513501 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:44:48 crc kubenswrapper[4904]: E1121 15:44:48.514351 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:44:54 crc kubenswrapper[4904]: I1121 15:44:54.366676 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:44:54 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:44:54 crc kubenswrapper[4904]: > Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.312087 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f"] Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.314477 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.328408 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f"] Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.335119 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-secret-volume\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.335615 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-config-volume\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.335904 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fbxg\" (UniqueName: \"kubernetes.io/projected/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-kube-api-access-9fbxg\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.366889 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.366903 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.437718 4904 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-config-volume\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.437821 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbxg\" (UniqueName: \"kubernetes.io/projected/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-kube-api-access-9fbxg\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.437888 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-secret-volume\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.440989 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-config-volume\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.445045 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-secret-volume\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.458507 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fbxg\" (UniqueName: \"kubernetes.io/projected/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-kube-api-access-9fbxg\") pod \"collect-profiles-29395665-jmb7f\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.513472 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:45:00 crc kubenswrapper[4904]: E1121 15:45:00.513905 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:45:00 crc kubenswrapper[4904]: I1121 15:45:00.642546 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:01 crc kubenswrapper[4904]: I1121 15:45:01.131594 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f"] Nov 21 15:45:01 crc kubenswrapper[4904]: I1121 15:45:01.371802 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" event={"ID":"a135f62f-d441-41b5-9a3a-1fc30dd65cdb","Type":"ContainerStarted","Data":"4c3e3db1cc0a96fc72739d318f90111e79c5607ff428cd16d9d79abf9a013a0e"} Nov 21 15:45:02 crc kubenswrapper[4904]: I1121 15:45:02.382219 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" event={"ID":"a135f62f-d441-41b5-9a3a-1fc30dd65cdb","Type":"ContainerStarted","Data":"5d1c0f4e1a1dfe56edcff67cd5c441c6410e68d0d2dfeadbbf958fa84ee4f89b"} Nov 21 15:45:02 crc kubenswrapper[4904]: I1121 15:45:02.403851 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" podStartSLOduration=2.403831464 podStartE2EDuration="2.403831464s" podCreationTimestamp="2025-11-21 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 15:45:02.399208212 +0000 UTC m=+7976.520740774" watchObservedRunningTime="2025-11-21 15:45:02.403831464 +0000 UTC m=+7976.525364016" Nov 21 15:45:04 crc kubenswrapper[4904]: I1121 15:45:04.368533 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:45:04 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:45:04 crc kubenswrapper[4904]: > Nov 21 15:45:05 crc kubenswrapper[4904]: I1121 15:45:05.418762 4904 generic.go:334] "Generic (PLEG): container finished" podID="a135f62f-d441-41b5-9a3a-1fc30dd65cdb" containerID="5d1c0f4e1a1dfe56edcff67cd5c441c6410e68d0d2dfeadbbf958fa84ee4f89b" exitCode=0 Nov 21 15:45:05 crc kubenswrapper[4904]: I1121 15:45:05.418861 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" event={"ID":"a135f62f-d441-41b5-9a3a-1fc30dd65cdb","Type":"ContainerDied","Data":"5d1c0f4e1a1dfe56edcff67cd5c441c6410e68d0d2dfeadbbf958fa84ee4f89b"} Nov 21 15:45:06 crc kubenswrapper[4904]: I1121 15:45:06.992805 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.192674 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fbxg\" (UniqueName: \"kubernetes.io/projected/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-kube-api-access-9fbxg\") pod \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.192807 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-secret-volume\") pod \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.192838 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-config-volume\") pod \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\" (UID: \"a135f62f-d441-41b5-9a3a-1fc30dd65cdb\") " Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.193510 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-config-volume" (OuterVolumeSpecName: "config-volume") pod "a135f62f-d441-41b5-9a3a-1fc30dd65cdb" (UID: "a135f62f-d441-41b5-9a3a-1fc30dd65cdb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.194282 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.198461 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a135f62f-d441-41b5-9a3a-1fc30dd65cdb" (UID: "a135f62f-d441-41b5-9a3a-1fc30dd65cdb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.199568 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-kube-api-access-9fbxg" (OuterVolumeSpecName: "kube-api-access-9fbxg") pod "a135f62f-d441-41b5-9a3a-1fc30dd65cdb" (UID: "a135f62f-d441-41b5-9a3a-1fc30dd65cdb"). InnerVolumeSpecName "kube-api-access-9fbxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.296514 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fbxg\" (UniqueName: \"kubernetes.io/projected/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-kube-api-access-9fbxg\") on node \"crc\" DevicePath \"\"" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.296564 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a135f62f-d441-41b5-9a3a-1fc30dd65cdb-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.437376 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" event={"ID":"a135f62f-d441-41b5-9a3a-1fc30dd65cdb","Type":"ContainerDied","Data":"4c3e3db1cc0a96fc72739d318f90111e79c5607ff428cd16d9d79abf9a013a0e"} Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.437417 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c3e3db1cc0a96fc72739d318f90111e79c5607ff428cd16d9d79abf9a013a0e" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.437436 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395665-jmb7f" Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.640903 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd"] Nov 21 15:45:07 crc kubenswrapper[4904]: I1121 15:45:07.650987 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395620-p8bfd"] Nov 21 15:45:08 crc kubenswrapper[4904]: I1121 15:45:08.548852 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd48f58-48a5-4b78-b157-b3808d8591b2" path="/var/lib/kubelet/pods/2bd48f58-48a5-4b78-b157-b3808d8591b2/volumes" Nov 21 15:45:13 crc kubenswrapper[4904]: I1121 15:45:13.513774 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:45:13 crc kubenswrapper[4904]: E1121 15:45:13.514712 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:45:14 crc kubenswrapper[4904]: I1121 15:45:14.370645 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:45:14 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:45:14 crc kubenswrapper[4904]: > Nov 21 15:45:24 crc kubenswrapper[4904]: I1121 15:45:24.375935 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:45:24 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:45:24 crc kubenswrapper[4904]: > Nov 21 15:45:28 crc 
kubenswrapper[4904]: I1121 15:45:28.513792 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:45:28 crc kubenswrapper[4904]: E1121 15:45:28.515899 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:45:34 crc kubenswrapper[4904]: I1121 15:45:34.385971 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:45:34 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:45:34 crc kubenswrapper[4904]: > Nov 21 15:45:35 crc kubenswrapper[4904]: I1121 15:45:35.526715 4904 scope.go:117] "RemoveContainer" containerID="74ba0bd4ec04cd6366c66a127fe351746ee72c016f8313c8efeda183f72446e7" Nov 21 15:45:43 crc kubenswrapper[4904]: I1121 15:45:43.513435 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:45:43 crc kubenswrapper[4904]: E1121 15:45:43.514344 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:45:44 crc kubenswrapper[4904]: I1121 15:45:44.364169 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" probeResult="failure" output=< Nov 21 15:45:44 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:45:44 crc kubenswrapper[4904]: > Nov 21 15:45:53 crc kubenswrapper[4904]: I1121 15:45:53.369609 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:45:53 crc kubenswrapper[4904]: I1121 15:45:53.425582 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:45:53 crc kubenswrapper[4904]: I1121 15:45:53.621186 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htm7l"] Nov 21 15:45:54 crc kubenswrapper[4904]: I1121 15:45:54.946168 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-htm7l" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" containerID="cri-o://0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3" gracePeriod=2 Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.737096 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.902318 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmvlh\" (UniqueName: \"kubernetes.io/projected/a9c60445-7574-4327-bd71-581b0871e780-kube-api-access-kmvlh\") pod \"a9c60445-7574-4327-bd71-581b0871e780\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.902504 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-utilities\") pod \"a9c60445-7574-4327-bd71-581b0871e780\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.902685 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-catalog-content\") pod \"a9c60445-7574-4327-bd71-581b0871e780\" (UID: \"a9c60445-7574-4327-bd71-581b0871e780\") " Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.909519 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-utilities" (OuterVolumeSpecName: "utilities") pod "a9c60445-7574-4327-bd71-581b0871e780" (UID: "a9c60445-7574-4327-bd71-581b0871e780"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.945294 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c60445-7574-4327-bd71-581b0871e780-kube-api-access-kmvlh" (OuterVolumeSpecName: "kube-api-access-kmvlh") pod "a9c60445-7574-4327-bd71-581b0871e780" (UID: "a9c60445-7574-4327-bd71-581b0871e780"). InnerVolumeSpecName "kube-api-access-kmvlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.963133 4904 generic.go:334] "Generic (PLEG): container finished" podID="a9c60445-7574-4327-bd71-581b0871e780" containerID="0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3" exitCode=0 Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.963182 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htm7l" event={"ID":"a9c60445-7574-4327-bd71-581b0871e780","Type":"ContainerDied","Data":"0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3"} Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.963197 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-htm7l" Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.963218 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htm7l" event={"ID":"a9c60445-7574-4327-bd71-581b0871e780","Type":"ContainerDied","Data":"267a4c0fb833f8dc494fd3d5030986c7fcbba44ba9b14886f9ae6066c17758fd"} Nov 21 15:45:55 crc kubenswrapper[4904]: I1121 15:45:55.963238 4904 scope.go:117] "RemoveContainer" containerID="0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.005868 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmvlh\" (UniqueName: \"kubernetes.io/projected/a9c60445-7574-4327-bd71-581b0871e780-kube-api-access-kmvlh\") on node \"crc\" DevicePath \"\"" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.005906 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.026705 4904 scope.go:117] "RemoveContainer" containerID="307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.053387 4904 scope.go:117] "RemoveContainer" containerID="b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.076777 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9c60445-7574-4327-bd71-581b0871e780" (UID: "a9c60445-7574-4327-bd71-581b0871e780"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.107074 4904 scope.go:117] "RemoveContainer" containerID="0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3" Nov 21 15:45:56 crc kubenswrapper[4904]: E1121 15:45:56.108183 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3\": container with ID starting with 0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3 not found: ID does not exist" containerID="0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.108225 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3"} err="failed to get container status \"0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3\": rpc error: code = NotFound desc = could not find container \"0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3\": container with ID starting with 0e86ad072f1d5adf52c2a1b5f4920b2208fe33e24953c5bd5e4b1a332b2a76c3 not found: ID does not exist" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.108251 4904 scope.go:117] "RemoveContainer" containerID="307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.108424 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9c60445-7574-4327-bd71-581b0871e780-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:45:56 crc kubenswrapper[4904]: E1121 15:45:56.108669 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c\": container with ID starting with 307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c not found: ID does not exist" containerID="307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.108725 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c"} err="failed to get container status \"307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c\": rpc error: code = NotFound desc = could not find container \"307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c\": container with ID starting with 307675337ebde1bb63ee02e917f07d5e5cc28211b02451ea8cef14f477e6117c not found: ID does not exist" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.108756 4904 scope.go:117] "RemoveContainer" containerID="b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de" Nov 21 15:45:56 crc kubenswrapper[4904]: E1121 15:45:56.109134 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de\": container with ID starting with b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de not found: ID does not exist" containerID="b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.109168 4904 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de"} err="failed to get container status \"b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de\": rpc error: code = NotFound desc = could not find container \"b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de\": container with ID starting with b1ab0c58ef3b80d90c922b0f43dbe9757c5c73c876793868c73c08edf4c211de not found: ID does not exist" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.299458 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htm7l"] Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.308410 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-htm7l"] Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.521888 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:45:56 crc kubenswrapper[4904]: E1121 15:45:56.522249 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:45:56 crc kubenswrapper[4904]: I1121 15:45:56.524997 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c60445-7574-4327-bd71-581b0871e780" path="/var/lib/kubelet/pods/a9c60445-7574-4327-bd71-581b0871e780/volumes" Nov 21 15:46:11 crc kubenswrapper[4904]: I1121 15:46:11.514621 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:46:12 crc kubenswrapper[4904]: I1121 15:46:12.198525 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"5b7edc3ad1df32604c5219d2187ace0108c4da9b9b177e3445b01fcf287877c7"} Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.962219 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2grw"] Nov 21 15:48:16 crc kubenswrapper[4904]: E1121 15:48:16.963219 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="extract-content" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.963234 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="extract-content" Nov 21 15:48:16 crc kubenswrapper[4904]: E1121 15:48:16.963255 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a135f62f-d441-41b5-9a3a-1fc30dd65cdb" containerName="collect-profiles" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.963261 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a135f62f-d441-41b5-9a3a-1fc30dd65cdb" containerName="collect-profiles" Nov 21 15:48:16 crc kubenswrapper[4904]: E1121 15:48:16.963281 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.963289 4904 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" Nov 21 15:48:16 crc kubenswrapper[4904]: E1121 15:48:16.963302 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="extract-utilities" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.963309 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="extract-utilities" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.963567 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a135f62f-d441-41b5-9a3a-1fc30dd65cdb" containerName="collect-profiles" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.963588 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c60445-7574-4327-bd71-581b0871e780" containerName="registry-server" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.967783 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.976983 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2grw"] Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.994345 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-utilities\") pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.994433 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-catalog-content\") pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:16 crc kubenswrapper[4904]: I1121 15:48:16.994763 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lbh\" (UniqueName: \"kubernetes.io/projected/7a863417-28fc-484b-909c-9204dbd243c1-kube-api-access-c7lbh\") pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:17 crc kubenswrapper[4904]: I1121 15:48:17.096397 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-utilities\") pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:17 crc kubenswrapper[4904]: I1121 15:48:17.096456 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-catalog-content\") pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:17 crc kubenswrapper[4904]: I1121 15:48:17.096589 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7lbh\" (UniqueName: \"kubernetes.io/projected/7a863417-28fc-484b-909c-9204dbd243c1-kube-api-access-c7lbh\") 
pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:17 crc kubenswrapper[4904]: I1121 15:48:17.097131 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-utilities\") pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:17 crc kubenswrapper[4904]: I1121 15:48:17.097163 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-catalog-content\") pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:17 crc kubenswrapper[4904]: I1121 15:48:17.117012 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7lbh\" (UniqueName: \"kubernetes.io/projected/7a863417-28fc-484b-909c-9204dbd243c1-kube-api-access-c7lbh\") pod \"redhat-marketplace-v2grw\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:17 crc kubenswrapper[4904]: I1121 15:48:17.307723 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:18 crc kubenswrapper[4904]: I1121 15:48:18.143184 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2grw"] Nov 21 15:48:18 crc kubenswrapper[4904]: I1121 15:48:18.699386 4904 generic.go:334] "Generic (PLEG): container finished" podID="7a863417-28fc-484b-909c-9204dbd243c1" containerID="9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d" exitCode=0 Nov 21 15:48:18 crc kubenswrapper[4904]: I1121 15:48:18.699455 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2grw" event={"ID":"7a863417-28fc-484b-909c-9204dbd243c1","Type":"ContainerDied","Data":"9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d"} Nov 21 15:48:18 crc kubenswrapper[4904]: I1121 15:48:18.699643 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2grw" event={"ID":"7a863417-28fc-484b-909c-9204dbd243c1","Type":"ContainerStarted","Data":"0db14b0705bdfe7f48f0dab3a56efd52f70cd2d33cd1538f506f5d19acebdf92"} Nov 21 15:48:19 crc kubenswrapper[4904]: I1121 15:48:19.710962 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2grw" event={"ID":"7a863417-28fc-484b-909c-9204dbd243c1","Type":"ContainerStarted","Data":"53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0"} Nov 21 15:48:21 crc kubenswrapper[4904]: I1121 15:48:21.733670 4904 generic.go:334] "Generic (PLEG): container finished" podID="7a863417-28fc-484b-909c-9204dbd243c1" containerID="53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0" exitCode=0 Nov 21 15:48:21 crc kubenswrapper[4904]: I1121 15:48:21.733757 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2grw" event={"ID":"7a863417-28fc-484b-909c-9204dbd243c1","Type":"ContainerDied","Data":"53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0"} Nov 21 15:48:22 crc kubenswrapper[4904]: I1121 15:48:22.749430 4904 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2grw" event={"ID":"7a863417-28fc-484b-909c-9204dbd243c1","Type":"ContainerStarted","Data":"4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645"} Nov 21 15:48:22 crc kubenswrapper[4904]: I1121 15:48:22.778114 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2grw" podStartSLOduration=3.328488733 podStartE2EDuration="6.77809024s" podCreationTimestamp="2025-11-21 15:48:16 +0000 UTC" firstStartedPulling="2025-11-21 15:48:18.701276712 +0000 UTC m=+8172.822809264" lastFinishedPulling="2025-11-21 15:48:22.150878219 +0000 UTC m=+8176.272410771" observedRunningTime="2025-11-21 15:48:22.768814636 +0000 UTC m=+8176.890347188" watchObservedRunningTime="2025-11-21 15:48:22.77809024 +0000 UTC m=+8176.899622792" Nov 21 15:48:27 crc kubenswrapper[4904]: I1121 15:48:27.308533 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:27 crc kubenswrapper[4904]: I1121 15:48:27.309749 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:27 crc kubenswrapper[4904]: I1121 15:48:27.382167 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:27 crc kubenswrapper[4904]: I1121 15:48:27.853011 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:28 crc kubenswrapper[4904]: I1121 15:48:28.113959 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:48:28 crc kubenswrapper[4904]: I1121 15:48:28.114051 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:48:30 crc kubenswrapper[4904]: I1121 15:48:30.950746 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2grw"] Nov 21 15:48:30 crc kubenswrapper[4904]: I1121 15:48:30.951244 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v2grw" podUID="7a863417-28fc-484b-909c-9204dbd243c1" containerName="registry-server" containerID="cri-o://4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645" gracePeriod=2 Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.468402 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.651809 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-utilities\") pod \"7a863417-28fc-484b-909c-9204dbd243c1\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.652337 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-catalog-content\") pod \"7a863417-28fc-484b-909c-9204dbd243c1\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.652459 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7lbh\" (UniqueName: \"kubernetes.io/projected/7a863417-28fc-484b-909c-9204dbd243c1-kube-api-access-c7lbh\") pod \"7a863417-28fc-484b-909c-9204dbd243c1\" (UID: \"7a863417-28fc-484b-909c-9204dbd243c1\") " Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.652905 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-utilities" (OuterVolumeSpecName: "utilities") pod "7a863417-28fc-484b-909c-9204dbd243c1" (UID: "7a863417-28fc-484b-909c-9204dbd243c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.653627 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.658920 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a863417-28fc-484b-909c-9204dbd243c1-kube-api-access-c7lbh" (OuterVolumeSpecName: "kube-api-access-c7lbh") pod "7a863417-28fc-484b-909c-9204dbd243c1" (UID: "7a863417-28fc-484b-909c-9204dbd243c1"). InnerVolumeSpecName "kube-api-access-c7lbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.667367 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a863417-28fc-484b-909c-9204dbd243c1" (UID: "7a863417-28fc-484b-909c-9204dbd243c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.755565 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a863417-28fc-484b-909c-9204dbd243c1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.755606 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7lbh\" (UniqueName: \"kubernetes.io/projected/7a863417-28fc-484b-909c-9204dbd243c1-kube-api-access-c7lbh\") on node \"crc\" DevicePath \"\"" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.836843 4904 generic.go:334] "Generic (PLEG): container finished" podID="7a863417-28fc-484b-909c-9204dbd243c1" containerID="4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645" exitCode=0 Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.836913 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2grw" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.836879 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2grw" event={"ID":"7a863417-28fc-484b-909c-9204dbd243c1","Type":"ContainerDied","Data":"4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645"} Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.837049 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2grw" event={"ID":"7a863417-28fc-484b-909c-9204dbd243c1","Type":"ContainerDied","Data":"0db14b0705bdfe7f48f0dab3a56efd52f70cd2d33cd1538f506f5d19acebdf92"} Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.837087 4904 scope.go:117] "RemoveContainer" containerID="4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.862615 4904 scope.go:117] "RemoveContainer" containerID="53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.876413 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2grw"] Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.886798 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2grw"] Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.902435 4904 scope.go:117] "RemoveContainer" containerID="9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.950957 4904 scope.go:117] "RemoveContainer" containerID="4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645" Nov 21 15:48:31 crc kubenswrapper[4904]: E1121 15:48:31.951311 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645\": container with ID starting with 4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645 not found: ID does not exist" containerID="4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.951336 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645"} err="failed to get container status 
\"4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645\": rpc error: code = NotFound desc = could not find container \"4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645\": container with ID starting with 4bc046e2ab216072632a2fbe6d428143a776cfd8cffd3064456b8fdd50d73645 not found: ID does not exist" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.951354 4904 scope.go:117] "RemoveContainer" containerID="53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0" Nov 21 15:48:31 crc kubenswrapper[4904]: E1121 15:48:31.951621 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0\": container with ID starting with 53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0 not found: ID does not exist" containerID="53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.951637 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0"} err="failed to get container status \"53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0\": rpc error: code = NotFound desc = could not find container \"53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0\": container with ID starting with 53185a0a7a0395ffc1d966366fd6958205065b07447a47165b85c848c22036e0 not found: ID does not exist" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.951664 4904 scope.go:117] "RemoveContainer" containerID="9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d" Nov 21 15:48:31 crc kubenswrapper[4904]: E1121 15:48:31.951923 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d\": container with ID starting with 9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d not found: ID does not exist" containerID="9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d" Nov 21 15:48:31 crc kubenswrapper[4904]: I1121 15:48:31.951939 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d"} err="failed to get container status \"9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d\": rpc error: code = NotFound desc = could not find container \"9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d\": container with ID starting with 9022b34ee17009dfed6325c046661213ca864b25d090a0e692fe3d0706274b6d not found: ID does not exist" Nov 21 15:48:32 crc kubenswrapper[4904]: I1121 15:48:32.527318 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a863417-28fc-484b-909c-9204dbd243c1" path="/var/lib/kubelet/pods/7a863417-28fc-484b-909c-9204dbd243c1/volumes" Nov 21 15:48:58 crc kubenswrapper[4904]: I1121 15:48:58.113486 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:48:58 crc kubenswrapper[4904]: I1121 15:48:58.114053 4904 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:49:28 crc kubenswrapper[4904]: I1121 15:49:28.113216 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:49:28 crc kubenswrapper[4904]: I1121 15:49:28.113821 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:49:28 crc kubenswrapper[4904]: I1121 15:49:28.113875 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 15:49:28 crc kubenswrapper[4904]: I1121 15:49:28.114770 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b7edc3ad1df32604c5219d2187ace0108c4da9b9b177e3445b01fcf287877c7"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 15:49:28 crc kubenswrapper[4904]: I1121 15:49:28.114832 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://5b7edc3ad1df32604c5219d2187ace0108c4da9b9b177e3445b01fcf287877c7" gracePeriod=600 Nov 21 15:49:28 crc kubenswrapper[4904]: I1121 15:49:28.399935 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="5b7edc3ad1df32604c5219d2187ace0108c4da9b9b177e3445b01fcf287877c7" exitCode=0 Nov 21 15:49:28 crc kubenswrapper[4904]: I1121 15:49:28.400006 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"5b7edc3ad1df32604c5219d2187ace0108c4da9b9b177e3445b01fcf287877c7"} Nov 21 15:49:28 crc kubenswrapper[4904]: I1121 15:49:28.400355 4904 scope.go:117] "RemoveContainer" containerID="49f8630b4628fec9715f604f3092c0aba38acc261da2b7f8c18af1a8495a5677" Nov 21 15:49:29 crc kubenswrapper[4904]: I1121 15:49:29.424075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"} Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.554253 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bdpsb"] Nov 21 15:50:55 crc kubenswrapper[4904]: E1121 15:50:55.557212 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a863417-28fc-484b-909c-9204dbd243c1" containerName="extract-utilities" Nov 21 
15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.557231 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a863417-28fc-484b-909c-9204dbd243c1" containerName="extract-utilities" Nov 21 15:50:55 crc kubenswrapper[4904]: E1121 15:50:55.557256 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a863417-28fc-484b-909c-9204dbd243c1" containerName="extract-content" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.557262 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a863417-28fc-484b-909c-9204dbd243c1" containerName="extract-content" Nov 21 15:50:55 crc kubenswrapper[4904]: E1121 15:50:55.557291 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a863417-28fc-484b-909c-9204dbd243c1" containerName="registry-server" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.557298 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a863417-28fc-484b-909c-9204dbd243c1" containerName="registry-server" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.557531 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a863417-28fc-484b-909c-9204dbd243c1" containerName="registry-server" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.560886 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.568608 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdpsb"] Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.655335 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg5vj\" (UniqueName: \"kubernetes.io/projected/fd8d9947-421a-4874-af70-ccacb3e470ff-kube-api-access-jg5vj\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.655591 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-catalog-content\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.656108 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-utilities\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.757780 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg5vj\" (UniqueName: \"kubernetes.io/projected/fd8d9947-421a-4874-af70-ccacb3e470ff-kube-api-access-jg5vj\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.757847 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-catalog-content\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " 
pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.757959 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-utilities\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.758361 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-utilities\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.758604 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-catalog-content\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.779377 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg5vj\" (UniqueName: \"kubernetes.io/projected/fd8d9947-421a-4874-af70-ccacb3e470ff-kube-api-access-jg5vj\") pod \"community-operators-bdpsb\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:55 crc kubenswrapper[4904]: I1121 15:50:55.887944 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:50:56 crc kubenswrapper[4904]: I1121 15:50:56.903007 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bdpsb"] Nov 21 15:50:57 crc kubenswrapper[4904]: I1121 15:50:57.440175 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpsb" event={"ID":"fd8d9947-421a-4874-af70-ccacb3e470ff","Type":"ContainerStarted","Data":"5f2f67eb89a1d27de8df9c1dae5da12e8ec2f6c5f6974df31d96d980506bde98"} Nov 21 15:50:57 crc kubenswrapper[4904]: I1121 15:50:57.440479 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpsb" event={"ID":"fd8d9947-421a-4874-af70-ccacb3e470ff","Type":"ContainerStarted","Data":"b7d13e645c9648f5fa4e2b67c3b82657dac06dd5ffbbc2863f379474cdb099d0"} Nov 21 15:50:58 crc kubenswrapper[4904]: I1121 15:50:58.452365 4904 generic.go:334] "Generic (PLEG): container finished" podID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerID="5f2f67eb89a1d27de8df9c1dae5da12e8ec2f6c5f6974df31d96d980506bde98" exitCode=0 Nov 21 15:50:58 crc kubenswrapper[4904]: I1121 15:50:58.452499 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpsb" event={"ID":"fd8d9947-421a-4874-af70-ccacb3e470ff","Type":"ContainerDied","Data":"5f2f67eb89a1d27de8df9c1dae5da12e8ec2f6c5f6974df31d96d980506bde98"} Nov 21 15:50:58 crc kubenswrapper[4904]: I1121 15:50:58.456633 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:51:00 crc kubenswrapper[4904]: I1121 15:51:00.474813 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpsb" 
event={"ID":"fd8d9947-421a-4874-af70-ccacb3e470ff","Type":"ContainerStarted","Data":"85b20dfda10dcc16cf4d7c6c11daab899f5c2ab0d6149832372cfb7ba4052b82"} Nov 21 15:51:02 crc kubenswrapper[4904]: I1121 15:51:02.498325 4904 generic.go:334] "Generic (PLEG): container finished" podID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerID="85b20dfda10dcc16cf4d7c6c11daab899f5c2ab0d6149832372cfb7ba4052b82" exitCode=0 Nov 21 15:51:02 crc kubenswrapper[4904]: I1121 15:51:02.498945 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpsb" event={"ID":"fd8d9947-421a-4874-af70-ccacb3e470ff","Type":"ContainerDied","Data":"85b20dfda10dcc16cf4d7c6c11daab899f5c2ab0d6149832372cfb7ba4052b82"} Nov 21 15:51:03 crc kubenswrapper[4904]: I1121 15:51:03.510220 4904 generic.go:334] "Generic (PLEG): container finished" podID="24d17dbe-f722-4f81-b271-ca1e191780a8" containerID="c3a6be066c8f0a59279ebc2465d911fa357941462dceae14a52854bcd474292b" exitCode=0 Nov 21 15:51:03 crc kubenswrapper[4904]: I1121 15:51:03.510325 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"24d17dbe-f722-4f81-b271-ca1e191780a8","Type":"ContainerDied","Data":"c3a6be066c8f0a59279ebc2465d911fa357941462dceae14a52854bcd474292b"} Nov 21 15:51:03 crc kubenswrapper[4904]: I1121 15:51:03.514533 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpsb" event={"ID":"fd8d9947-421a-4874-af70-ccacb3e470ff","Type":"ContainerStarted","Data":"cc172b6a4482a203444a2d9ef059419e5413426bf669bcbfe9d1d98077af749e"} Nov 21 15:51:03 crc kubenswrapper[4904]: I1121 15:51:03.580935 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bdpsb" podStartSLOduration=3.9870535719999998 podStartE2EDuration="8.580916036s" podCreationTimestamp="2025-11-21 15:50:55 +0000 UTC" firstStartedPulling="2025-11-21 15:50:58.456358426 +0000 UTC m=+8332.577890978" lastFinishedPulling="2025-11-21 15:51:03.05022089 +0000 UTC m=+8337.171753442" observedRunningTime="2025-11-21 15:51:03.577969444 +0000 UTC m=+8337.699502016" watchObservedRunningTime="2025-11-21 15:51:03.580916036 +0000 UTC m=+8337.702448588" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.063559 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.144987 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-temporary\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.145053 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ssh-key\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.145109 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-workdir\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.145252 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.145294 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ca-certs\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.145454 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hslnx\" (UniqueName: \"kubernetes.io/projected/24d17dbe-f722-4f81-b271-ca1e191780a8-kube-api-access-hslnx\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.145495 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config-secret\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.145533 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.145572 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-config-data\") pod \"24d17dbe-f722-4f81-b271-ca1e191780a8\" (UID: \"24d17dbe-f722-4f81-b271-ca1e191780a8\") " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.160434 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.167645 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.170234 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d17dbe-f722-4f81-b271-ca1e191780a8-kube-api-access-hslnx" (OuterVolumeSpecName: "kube-api-access-hslnx") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "kube-api-access-hslnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.174987 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.175826 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-config-data" (OuterVolumeSpecName: "config-data") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.229494 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.229529 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.232975 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.250405 4904 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.250451 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hslnx\" (UniqueName: \"kubernetes.io/projected/24d17dbe-f722-4f81-b271-ca1e191780a8-kube-api-access-hslnx\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.250468 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.250481 4904 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.250495 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/24d17dbe-f722-4f81-b271-ca1e191780a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.250507 4904 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.250519 4904 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.250534 4904 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/24d17dbe-f722-4f81-b271-ca1e191780a8-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.268850 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "24d17dbe-f722-4f81-b271-ca1e191780a8" (UID: "24d17dbe-f722-4f81-b271-ca1e191780a8"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.278010 4904 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.353614 4904 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.353644 4904 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/24d17dbe-f722-4f81-b271-ca1e191780a8-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.533950 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"24d17dbe-f722-4f81-b271-ca1e191780a8","Type":"ContainerDied","Data":"ff4795d0c74139c7a107bdb700485bee2760678bac8d7a0b29750ef76216585a"} Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.534286 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff4795d0c74139c7a107bdb700485bee2760678bac8d7a0b29750ef76216585a" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.534001 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.888556 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.888607 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:51:05 crc kubenswrapper[4904]: I1121 15:51:05.948407 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.692401 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 15:51:10 crc kubenswrapper[4904]: E1121 15:51:10.693875 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d17dbe-f722-4f81-b271-ca1e191780a8" containerName="tempest-tests-tempest-tests-runner" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.693902 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d17dbe-f722-4f81-b271-ca1e191780a8" containerName="tempest-tests-tempest-tests-runner" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.694185 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d17dbe-f722-4f81-b271-ca1e191780a8" containerName="tempest-tests-tempest-tests-runner" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.695390 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.701435 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-zgzr7" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.704791 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.877174 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6675a319-1e0d-4549-b2f0-8e5307295f66\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.877565 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjg4\" (UniqueName: \"kubernetes.io/projected/6675a319-1e0d-4549-b2f0-8e5307295f66-kube-api-access-msjg4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6675a319-1e0d-4549-b2f0-8e5307295f66\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.979646 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6675a319-1e0d-4549-b2f0-8e5307295f66\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.979762 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msjg4\" (UniqueName: \"kubernetes.io/projected/6675a319-1e0d-4549-b2f0-8e5307295f66-kube-api-access-msjg4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6675a319-1e0d-4549-b2f0-8e5307295f66\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:10 crc kubenswrapper[4904]: I1121 15:51:10.981333 4904 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6675a319-1e0d-4549-b2f0-8e5307295f66\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:11 crc kubenswrapper[4904]: I1121 15:51:11.015989 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6675a319-1e0d-4549-b2f0-8e5307295f66\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:11 crc kubenswrapper[4904]: I1121 15:51:11.018027 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msjg4\" (UniqueName: \"kubernetes.io/projected/6675a319-1e0d-4549-b2f0-8e5307295f66-kube-api-access-msjg4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"6675a319-1e0d-4549-b2f0-8e5307295f66\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:11 crc 
kubenswrapper[4904]: I1121 15:51:11.028783 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 21 15:51:11 crc kubenswrapper[4904]: I1121 15:51:11.707773 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 21 15:51:11 crc kubenswrapper[4904]: W1121 15:51:11.719808 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6675a319_1e0d_4549_b2f0_8e5307295f66.slice/crio-bff7e01aec0f4b3d49ea1ea436c6b5fb3c6ee1b0e87eb96b64cc4f2bd912547a WatchSource:0}: Error finding container bff7e01aec0f4b3d49ea1ea436c6b5fb3c6ee1b0e87eb96b64cc4f2bd912547a: Status 404 returned error can't find the container with id bff7e01aec0f4b3d49ea1ea436c6b5fb3c6ee1b0e87eb96b64cc4f2bd912547a Nov 21 15:51:12 crc kubenswrapper[4904]: I1121 15:51:12.605610 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6675a319-1e0d-4549-b2f0-8e5307295f66","Type":"ContainerStarted","Data":"bff7e01aec0f4b3d49ea1ea436c6b5fb3c6ee1b0e87eb96b64cc4f2bd912547a"} Nov 21 15:51:14 crc kubenswrapper[4904]: I1121 15:51:14.625356 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"6675a319-1e0d-4549-b2f0-8e5307295f66","Type":"ContainerStarted","Data":"8be6ddd63fafd03535378a140eef8d261bd70be3608205df2a835800aa4386d0"} Nov 21 15:51:14 crc kubenswrapper[4904]: I1121 15:51:14.643750 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.58178516 podStartE2EDuration="4.643730179s" podCreationTimestamp="2025-11-21 15:51:10 +0000 UTC" firstStartedPulling="2025-11-21 15:51:11.724539981 +0000 UTC m=+8345.846072533" lastFinishedPulling="2025-11-21 15:51:13.786485 +0000 UTC m=+8347.908017552" observedRunningTime="2025-11-21 15:51:14.639816765 +0000 UTC m=+8348.761349317" watchObservedRunningTime="2025-11-21 15:51:14.643730179 +0000 UTC m=+8348.765262731" Nov 21 15:51:15 crc kubenswrapper[4904]: I1121 15:51:15.936411 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:51:15 crc kubenswrapper[4904]: I1121 15:51:15.986108 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdpsb"] Nov 21 15:51:16 crc kubenswrapper[4904]: I1121 15:51:16.644370 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bdpsb" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerName="registry-server" containerID="cri-o://cc172b6a4482a203444a2d9ef059419e5413426bf669bcbfe9d1d98077af749e" gracePeriod=2 Nov 21 15:51:17 crc kubenswrapper[4904]: I1121 15:51:17.670499 4904 generic.go:334] "Generic (PLEG): container finished" podID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerID="cc172b6a4482a203444a2d9ef059419e5413426bf669bcbfe9d1d98077af749e" exitCode=0 Nov 21 15:51:17 crc kubenswrapper[4904]: I1121 15:51:17.670949 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpsb" event={"ID":"fd8d9947-421a-4874-af70-ccacb3e470ff","Type":"ContainerDied","Data":"cc172b6a4482a203444a2d9ef059419e5413426bf669bcbfe9d1d98077af749e"} 
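
The teardown sequences above (redhat-operators-htm7l, redhat-marketplace-v2grw, and now community-operators-bdpsb) all follow the same shape: "SyncLoop DELETE" from the API, "Killing container with a grace period" (gracePeriod=2 for these catalog pods), PLEG ContainerDied events, volume unmounts, then RemoveContainer. The E-level "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" entries that follow each RemoveContainer are noise rather than failed cleanup: the kubelet re-queries a container ID it has already removed from CRI-O, and the later "Cleaned up orphaned pod volumes dir" entries confirm teardown completed. Likewise, the E-level cpu_manager.go "RemoveStaleState: removing container" entries when the next catalog pod is ADDed are routine purging of CPU/memory-manager state left by the previous pod. A minimal sketch for reconstructing these lifecycles from the PLEG events, assuming this excerpt is saved to a file named kubelet-journal.log (hypothetical name):

    import re
    from collections import defaultdict

    LOG = "kubelet-journal.log"  # hypothetical path for this saved excerpt

    # Matches the klog-structured PLEG events seen above, e.g.
    #   "SyncLoop (PLEG): event for pod" pod="ns/name"
    #   event={"ID":"<pod-uid>","Type":"ContainerDied","Data":"<container-id>"}
    EVENT = re.compile(
        r'SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" '
        r'event={"ID":"(?P<uid>[0-9a-f-]+)","Type":"(?P<type>\w+)","Data":"(?P<cid>[0-9a-f]+)"}'
    )

    timeline = defaultdict(list)
    text = open(LOG).read()  # use finditer: several entries share one physical line here
    for m in EVENT.finditer(text):
        timeline[(m["pod"], m["uid"])].append((m["type"], m["cid"][:12]))

    for (pod, uid), events in sorted(timeline.items()):
        print(f"{pod} ({uid})")
        for etype, cid in events:
            print(f"  {etype:17} {cid}")

On this excerpt, each catalog pod shows ContainerStarted/ContainerDied pairs for extract-utilities, extract-content, and registry-server, with the pod sandbox ID appearing as an early ContainerStarted and a final ContainerDied at teardown.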
Nov 21 15:51:17 crc kubenswrapper[4904]: I1121 15:51:17.854691 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:51:17 crc kubenswrapper[4904]: I1121 15:51:17.952416 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg5vj\" (UniqueName: \"kubernetes.io/projected/fd8d9947-421a-4874-af70-ccacb3e470ff-kube-api-access-jg5vj\") pod \"fd8d9947-421a-4874-af70-ccacb3e470ff\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " Nov 21 15:51:17 crc kubenswrapper[4904]: I1121 15:51:17.952535 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-catalog-content\") pod \"fd8d9947-421a-4874-af70-ccacb3e470ff\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " Nov 21 15:51:17 crc kubenswrapper[4904]: I1121 15:51:17.952578 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-utilities\") pod \"fd8d9947-421a-4874-af70-ccacb3e470ff\" (UID: \"fd8d9947-421a-4874-af70-ccacb3e470ff\") " Nov 21 15:51:17 crc kubenswrapper[4904]: I1121 15:51:17.954066 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-utilities" (OuterVolumeSpecName: "utilities") pod "fd8d9947-421a-4874-af70-ccacb3e470ff" (UID: "fd8d9947-421a-4874-af70-ccacb3e470ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:51:17 crc kubenswrapper[4904]: I1121 15:51:17.960343 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8d9947-421a-4874-af70-ccacb3e470ff-kube-api-access-jg5vj" (OuterVolumeSpecName: "kube-api-access-jg5vj") pod "fd8d9947-421a-4874-af70-ccacb3e470ff" (UID: "fd8d9947-421a-4874-af70-ccacb3e470ff"). InnerVolumeSpecName "kube-api-access-jg5vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.005404 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd8d9947-421a-4874-af70-ccacb3e470ff" (UID: "fd8d9947-421a-4874-af70-ccacb3e470ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.055137 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg5vj\" (UniqueName: \"kubernetes.io/projected/fd8d9947-421a-4874-af70-ccacb3e470ff-kube-api-access-jg5vj\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.055174 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.055184 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd8d9947-421a-4874-af70-ccacb3e470ff-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.688015 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bdpsb" event={"ID":"fd8d9947-421a-4874-af70-ccacb3e470ff","Type":"ContainerDied","Data":"b7d13e645c9648f5fa4e2b67c3b82657dac06dd5ffbbc2863f379474cdb099d0"} Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.688083 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bdpsb" Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.688888 4904 scope.go:117] "RemoveContainer" containerID="cc172b6a4482a203444a2d9ef059419e5413426bf669bcbfe9d1d98077af749e" Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.726000 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bdpsb"] Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.729583 4904 scope.go:117] "RemoveContainer" containerID="85b20dfda10dcc16cf4d7c6c11daab899f5c2ab0d6149832372cfb7ba4052b82" Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.742783 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bdpsb"] Nov 21 15:51:18 crc kubenswrapper[4904]: I1121 15:51:18.762937 4904 scope.go:117] "RemoveContainer" containerID="5f2f67eb89a1d27de8df9c1dae5da12e8ec2f6c5f6974df31d96d980506bde98" Nov 21 15:51:20 crc kubenswrapper[4904]: I1121 15:51:20.530418 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" path="/var/lib/kubelet/pods/fd8d9947-421a-4874-af70-ccacb3e470ff/volumes" Nov 21 15:51:28 crc kubenswrapper[4904]: I1121 15:51:28.113943 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:51:28 crc kubenswrapper[4904]: I1121 15:51:28.114471 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.220341 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bnnlm/must-gather-dlgbb"] Nov 21 15:51:47 crc kubenswrapper[4904]: E1121 15:51:47.221342 4904 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerName="extract-utilities" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.221357 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerName="extract-utilities" Nov 21 15:51:47 crc kubenswrapper[4904]: E1121 15:51:47.221378 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerName="registry-server" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.221384 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerName="registry-server" Nov 21 15:51:47 crc kubenswrapper[4904]: E1121 15:51:47.221417 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerName="extract-content" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.221423 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerName="extract-content" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.221669 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8d9947-421a-4874-af70-ccacb3e470ff" containerName="registry-server" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.222919 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.226845 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bnnlm"/"kube-root-ca.crt" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.227087 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bnnlm"/"openshift-service-ca.crt" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.228257 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bnnlm"/"default-dockercfg-mbdf6" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.240109 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bnnlm/must-gather-dlgbb"] Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.263234 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lcpq\" (UniqueName: \"kubernetes.io/projected/eb0e4139-6747-439e-92ec-f496c5a5de62-kube-api-access-2lcpq\") pod \"must-gather-dlgbb\" (UID: \"eb0e4139-6747-439e-92ec-f496c5a5de62\") " pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.263385 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb0e4139-6747-439e-92ec-f496c5a5de62-must-gather-output\") pod \"must-gather-dlgbb\" (UID: \"eb0e4139-6747-439e-92ec-f496c5a5de62\") " pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.365549 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lcpq\" (UniqueName: \"kubernetes.io/projected/eb0e4139-6747-439e-92ec-f496c5a5de62-kube-api-access-2lcpq\") pod \"must-gather-dlgbb\" (UID: \"eb0e4139-6747-439e-92ec-f496c5a5de62\") " pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.365716 4904 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb0e4139-6747-439e-92ec-f496c5a5de62-must-gather-output\") pod \"must-gather-dlgbb\" (UID: \"eb0e4139-6747-439e-92ec-f496c5a5de62\") " pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.366076 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb0e4139-6747-439e-92ec-f496c5a5de62-must-gather-output\") pod \"must-gather-dlgbb\" (UID: \"eb0e4139-6747-439e-92ec-f496c5a5de62\") " pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.384444 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lcpq\" (UniqueName: \"kubernetes.io/projected/eb0e4139-6747-439e-92ec-f496c5a5de62-kube-api-access-2lcpq\") pod \"must-gather-dlgbb\" (UID: \"eb0e4139-6747-439e-92ec-f496c5a5de62\") " pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 15:51:47 crc kubenswrapper[4904]: I1121 15:51:47.544245 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 15:51:48 crc kubenswrapper[4904]: I1121 15:51:48.084038 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bnnlm/must-gather-dlgbb"] Nov 21 15:51:49 crc kubenswrapper[4904]: I1121 15:51:49.037142 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" event={"ID":"eb0e4139-6747-439e-92ec-f496c5a5de62","Type":"ContainerStarted","Data":"11c91caafc41a32258b6d737f3af152c714e350efbeec5a7e69b07e4cef6e109"} Nov 21 15:51:58 crc kubenswrapper[4904]: I1121 15:51:58.113530 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:51:58 crc kubenswrapper[4904]: I1121 15:51:58.114126 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:51:59 crc kubenswrapper[4904]: I1121 15:51:59.142734 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" event={"ID":"eb0e4139-6747-439e-92ec-f496c5a5de62","Type":"ContainerStarted","Data":"0e9487674d5247d9126365b14b143a0b82a3719214accfd2775b6d067eb08e30"} Nov 21 15:51:59 crc kubenswrapper[4904]: I1121 15:51:59.143282 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" event={"ID":"eb0e4139-6747-439e-92ec-f496c5a5de62","Type":"ContainerStarted","Data":"94c0ab1d57f0df5f845758c1a38c2e9a0d9a1860be79ad32e11477c6231ec992"} Nov 21 15:51:59 crc kubenswrapper[4904]: I1121 15:51:59.161328 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" podStartSLOduration=1.653161194 podStartE2EDuration="12.161310523s" podCreationTimestamp="2025-11-21 15:51:47 +0000 UTC" firstStartedPulling="2025-11-21 15:51:48.088922647 +0000 UTC m=+8382.210455199" 
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.155647 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-b79tc"]
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.157781 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-b79tc"
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.317954 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d046b7b-4610-491f-8636-d60e6073caa8-host\") pod \"crc-debug-b79tc\" (UID: \"3d046b7b-4610-491f-8636-d60e6073caa8\") " pod="openshift-must-gather-bnnlm/crc-debug-b79tc"
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.318064 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rqj9\" (UniqueName: \"kubernetes.io/projected/3d046b7b-4610-491f-8636-d60e6073caa8-kube-api-access-8rqj9\") pod \"crc-debug-b79tc\" (UID: \"3d046b7b-4610-491f-8636-d60e6073caa8\") " pod="openshift-must-gather-bnnlm/crc-debug-b79tc"
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.421062 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d046b7b-4610-491f-8636-d60e6073caa8-host\") pod \"crc-debug-b79tc\" (UID: \"3d046b7b-4610-491f-8636-d60e6073caa8\") " pod="openshift-must-gather-bnnlm/crc-debug-b79tc"
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.421524 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rqj9\" (UniqueName: \"kubernetes.io/projected/3d046b7b-4610-491f-8636-d60e6073caa8-kube-api-access-8rqj9\") pod \"crc-debug-b79tc\" (UID: \"3d046b7b-4610-491f-8636-d60e6073caa8\") " pod="openshift-must-gather-bnnlm/crc-debug-b79tc"
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.422581 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d046b7b-4610-491f-8636-d60e6073caa8-host\") pod \"crc-debug-b79tc\" (UID: \"3d046b7b-4610-491f-8636-d60e6073caa8\") " pod="openshift-must-gather-bnnlm/crc-debug-b79tc"
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.477777 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rqj9\" (UniqueName: \"kubernetes.io/projected/3d046b7b-4610-491f-8636-d60e6073caa8-kube-api-access-8rqj9\") pod \"crc-debug-b79tc\" (UID: \"3d046b7b-4610-491f-8636-d60e6073caa8\") " pod="openshift-must-gather-bnnlm/crc-debug-b79tc"
Nov 21 15:52:08 crc kubenswrapper[4904]: I1121 15:52:08.486028 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-b79tc"
Nov 21 15:52:08 crc kubenswrapper[4904]: W1121 15:52:08.565385 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d046b7b_4610_491f_8636_d60e6073caa8.slice/crio-d8396d9010a0114b0d7de0be27ad624459e1da9da964146c378611a5e5a29929 WatchSource:0}: Error finding container d8396d9010a0114b0d7de0be27ad624459e1da9da964146c378611a5e5a29929: Status 404 returned error can't find the container with id d8396d9010a0114b0d7de0be27ad624459e1da9da964146c378611a5e5a29929
Nov 21 15:52:09 crc kubenswrapper[4904]: I1121 15:52:09.311038 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/crc-debug-b79tc" event={"ID":"3d046b7b-4610-491f-8636-d60e6073caa8","Type":"ContainerStarted","Data":"d8396d9010a0114b0d7de0be27ad624459e1da9da964146c378611a5e5a29929"}
Nov 21 15:52:28 crc kubenswrapper[4904]: I1121 15:52:28.113326 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 15:52:28 crc kubenswrapper[4904]: I1121 15:52:28.113869 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 15:52:28 crc kubenswrapper[4904]: I1121 15:52:28.113929 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 15:52:28 crc kubenswrapper[4904]: I1121 15:52:28.114784 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 15:52:28 crc kubenswrapper[4904]: I1121 15:52:28.114832 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" gracePeriod=600
Nov 21 15:52:28 crc kubenswrapper[4904]: E1121 15:52:28.539086 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:52:28 crc kubenswrapper[4904]: E1121 15:52:28.591493 4904 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296"
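
The repeating liveness failure for machine-config-daemon-xb8tn crosses its failure threshold here and the kubelet kills the container (gracePeriod=600). The probe is a plain HTTP GET against the daemon's health port on localhost, so it can be reproduced by hand; a sketch, assuming shell access to the node via a debug session:

  # From a shell on the node (e.g. oc debug node/crc, then chroot /host):
  curl -sv http://127.0.0.1:8798/health
  # "connect: connection refused", as in the probe output above, means nothing
  # is listening on port 8798 at all - the process is down, not merely unhealthy.
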
Nov 21 15:52:28 crc kubenswrapper[4904]: E1121 15:52:28.595922 4904 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rqj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-b79tc_openshift-must-gather-bnnlm(3d046b7b-4610-491f-8636-d60e6073caa8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 21 15:52:28 crc kubenswrapper[4904]: E1121 15:52:28.598085 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-bnnlm/crc-debug-b79tc" podUID="3d046b7b-4610-491f-8636-d60e6073caa8"
Nov 21 15:52:28 crc kubenswrapper[4904]: I1121 15:52:28.621864 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" exitCode=0
Nov 21 15:52:28 crc kubenswrapper[4904]: I1121 15:52:28.623141 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"}
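
The "Unhandled Error" dump above embeds the entire spec of container-00 on one line. Its Command field, reflowed here for readability (content taken verbatim from the log; newlines stand in for the original ';' separators), shows the crc-debug pod chrooting into the host and driving a toolbox-based sos report:

  # Executed via: chroot /host bash -c '...'
  echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc
  rm -rf "/var/tmp/sos-osp" && mkdir -p "/var/tmp/sos-osp" && sudo podman rm --force toolbox-osp
  sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && \
    toolbox sos report --batch --all-logs \
      --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto \
      --tmp-dir="/var/tmp/sos-osp" && \
    if [[ "$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)" != '' ]]; then \
      tar --ignore-failed-read --warning=no-file-changed -cJf "/var/tmp/sos-osp/podlogs.tar.xz" \
        --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi

Note the start fails at this point only because the image pull was canceled mid-copy (ErrImagePull: context canceled); the command itself never ran on this attempt.
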
containerID="5b7edc3ad1df32604c5219d2187ace0108c4da9b9b177e3445b01fcf287877c7" Nov 21 15:52:28 crc kubenswrapper[4904]: I1121 15:52:28.624035 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:52:28 crc kubenswrapper[4904]: E1121 15:52:28.624333 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:52:28 crc kubenswrapper[4904]: E1121 15:52:28.625398 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-bnnlm/crc-debug-b79tc" podUID="3d046b7b-4610-491f-8636-d60e6073caa8" Nov 21 15:52:40 crc kubenswrapper[4904]: I1121 15:52:40.512763 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:52:40 crc kubenswrapper[4904]: E1121 15:52:40.513472 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:52:44 crc kubenswrapper[4904]: I1121 15:52:44.833424 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/crc-debug-b79tc" event={"ID":"3d046b7b-4610-491f-8636-d60e6073caa8","Type":"ContainerStarted","Data":"bdb19f28dc0ee70148a40059b14b339795c38e6841f2a2c546830c8ec45b7754"} Nov 21 15:52:44 crc kubenswrapper[4904]: I1121 15:52:44.872901 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bnnlm/crc-debug-b79tc" podStartSLOduration=1.096959274 podStartE2EDuration="36.872876236s" podCreationTimestamp="2025-11-21 15:52:08 +0000 UTC" firstStartedPulling="2025-11-21 15:52:08.569076497 +0000 UTC m=+8402.690609049" lastFinishedPulling="2025-11-21 15:52:44.344993459 +0000 UTC m=+8438.466526011" observedRunningTime="2025-11-21 15:52:44.86227741 +0000 UTC m=+8438.983809962" watchObservedRunningTime="2025-11-21 15:52:44.872876236 +0000 UTC m=+8438.994408798" Nov 21 15:52:53 crc kubenswrapper[4904]: I1121 15:52:53.513944 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:52:53 crc kubenswrapper[4904]: E1121 15:52:53.514813 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:53:07 crc kubenswrapper[4904]: I1121 
Nov 21 15:53:07 crc kubenswrapper[4904]: I1121 15:53:07.513395 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"
Nov 21 15:53:07 crc kubenswrapper[4904]: E1121 15:53:07.514376 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:53:21 crc kubenswrapper[4904]: I1121 15:53:21.513125 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"
Nov 21 15:53:21 crc kubenswrapper[4904]: E1121 15:53:21.513935 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:53:33 crc kubenswrapper[4904]: I1121 15:53:33.513602 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"
Nov 21 15:53:33 crc kubenswrapper[4904]: E1121 15:53:33.514691 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:53:46 crc kubenswrapper[4904]: I1121 15:53:46.524577 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"
Nov 21 15:53:46 crc kubenswrapper[4904]: E1121 15:53:46.525724 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:53:58 crc kubenswrapper[4904]: I1121 15:53:58.515791 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"
Nov 21 15:53:58 crc kubenswrapper[4904]: E1121 15:53:58.516957 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
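
From here the kubelet re-queues machine-config-daemon roughly every 12 to 14 seconds, and each attempt is rejected with the same error while the 5m0s back-off window is open; the log is recording the retry loop, not a new fault each time. Typical triage looks at why the container exited in the first place; a sketch using standard oc commands:

  oc -n openshift-machine-config-operator describe pod machine-config-daemon-xb8tn
  # Logs of the previous (crashed) instance of the container:
  oc -n openshift-machine-config-operator logs machine-config-daemon-xb8tn \
    -c machine-config-daemon --previous
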
containerID="bdb19f28dc0ee70148a40059b14b339795c38e6841f2a2c546830c8ec45b7754" exitCode=0 Nov 21 15:54:03 crc kubenswrapper[4904]: I1121 15:54:03.760084 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/crc-debug-b79tc" event={"ID":"3d046b7b-4610-491f-8636-d60e6073caa8","Type":"ContainerDied","Data":"bdb19f28dc0ee70148a40059b14b339795c38e6841f2a2c546830c8ec45b7754"} Nov 21 15:54:04 crc kubenswrapper[4904]: I1121 15:54:04.948781 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-b79tc" Nov 21 15:54:04 crc kubenswrapper[4904]: I1121 15:54:04.993137 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-b79tc"] Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.007998 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-b79tc"] Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.090027 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d046b7b-4610-491f-8636-d60e6073caa8-host\") pod \"3d046b7b-4610-491f-8636-d60e6073caa8\" (UID: \"3d046b7b-4610-491f-8636-d60e6073caa8\") " Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.090277 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rqj9\" (UniqueName: \"kubernetes.io/projected/3d046b7b-4610-491f-8636-d60e6073caa8-kube-api-access-8rqj9\") pod \"3d046b7b-4610-491f-8636-d60e6073caa8\" (UID: \"3d046b7b-4610-491f-8636-d60e6073caa8\") " Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.090335 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d046b7b-4610-491f-8636-d60e6073caa8-host" (OuterVolumeSpecName: "host") pod "3d046b7b-4610-491f-8636-d60e6073caa8" (UID: "3d046b7b-4610-491f-8636-d60e6073caa8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.090812 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3d046b7b-4610-491f-8636-d60e6073caa8-host\") on node \"crc\" DevicePath \"\"" Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.097982 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d046b7b-4610-491f-8636-d60e6073caa8-kube-api-access-8rqj9" (OuterVolumeSpecName: "kube-api-access-8rqj9") pod "3d046b7b-4610-491f-8636-d60e6073caa8" (UID: "3d046b7b-4610-491f-8636-d60e6073caa8"). InnerVolumeSpecName "kube-api-access-8rqj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.193246 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rqj9\" (UniqueName: \"kubernetes.io/projected/3d046b7b-4610-491f-8636-d60e6073caa8-kube-api-access-8rqj9\") on node \"crc\" DevicePath \"\"" Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.788249 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8396d9010a0114b0d7de0be27ad624459e1da9da964146c378611a5e5a29929" Nov 21 15:54:05 crc kubenswrapper[4904]: I1121 15:54:05.788299 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-b79tc" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.168022 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-27fc7"] Nov 21 15:54:06 crc kubenswrapper[4904]: E1121 15:54:06.169182 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d046b7b-4610-491f-8636-d60e6073caa8" containerName="container-00" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.169201 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d046b7b-4610-491f-8636-d60e6073caa8" containerName="container-00" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.169432 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d046b7b-4610-491f-8636-d60e6073caa8" containerName="container-00" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.170304 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.319913 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp8fq\" (UniqueName: \"kubernetes.io/projected/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-kube-api-access-lp8fq\") pod \"crc-debug-27fc7\" (UID: \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\") " pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.319977 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-host\") pod \"crc-debug-27fc7\" (UID: \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\") " pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.423213 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp8fq\" (UniqueName: \"kubernetes.io/projected/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-kube-api-access-lp8fq\") pod \"crc-debug-27fc7\" (UID: \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\") " pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.423314 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-host\") pod \"crc-debug-27fc7\" (UID: \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\") " pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.423687 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-host\") pod \"crc-debug-27fc7\" (UID: \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\") " pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.441634 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp8fq\" (UniqueName: \"kubernetes.io/projected/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-kube-api-access-lp8fq\") pod \"crc-debug-27fc7\" (UID: \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\") " pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.492597 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.529246 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d046b7b-4610-491f-8636-d60e6073caa8" path="/var/lib/kubelet/pods/3d046b7b-4610-491f-8636-d60e6073caa8/volumes" Nov 21 15:54:06 crc kubenswrapper[4904]: I1121 15:54:06.808340 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/crc-debug-27fc7" event={"ID":"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d","Type":"ContainerStarted","Data":"05b763b8c43447d0283ac1e444d193cf0cb64890743b24824b82f3d14f408315"} Nov 21 15:54:07 crc kubenswrapper[4904]: I1121 15:54:07.818579 4904 generic.go:334] "Generic (PLEG): container finished" podID="ad6eae0b-eec2-4a5e-8c13-b8f0838d359d" containerID="29796af511a1b64317cac2c3e929b432c1b01f4040bdceb4b503ee289b6a5d2e" exitCode=0 Nov 21 15:54:07 crc kubenswrapper[4904]: I1121 15:54:07.818796 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/crc-debug-27fc7" event={"ID":"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d","Type":"ContainerDied","Data":"29796af511a1b64317cac2c3e929b432c1b01f4040bdceb4b503ee289b6a5d2e"} Nov 21 15:54:08 crc kubenswrapper[4904]: I1121 15:54:08.973327 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.090909 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-host\") pod \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\" (UID: \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\") " Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.091048 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-host" (OuterVolumeSpecName: "host") pod "ad6eae0b-eec2-4a5e-8c13-b8f0838d359d" (UID: "ad6eae0b-eec2-4a5e-8c13-b8f0838d359d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.091836 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp8fq\" (UniqueName: \"kubernetes.io/projected/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-kube-api-access-lp8fq\") pod \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\" (UID: \"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d\") " Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.093001 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-host\") on node \"crc\" DevicePath \"\"" Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.097828 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-kube-api-access-lp8fq" (OuterVolumeSpecName: "kube-api-access-lp8fq") pod "ad6eae0b-eec2-4a5e-8c13-b8f0838d359d" (UID: "ad6eae0b-eec2-4a5e-8c13-b8f0838d359d"). InnerVolumeSpecName "kube-api-access-lp8fq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.196230 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp8fq\" (UniqueName: \"kubernetes.io/projected/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d-kube-api-access-lp8fq\") on node \"crc\" DevicePath \"\"" Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.513548 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:54:09 crc kubenswrapper[4904]: E1121 15:54:09.515144 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.843598 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/crc-debug-27fc7" event={"ID":"ad6eae0b-eec2-4a5e-8c13-b8f0838d359d","Type":"ContainerDied","Data":"05b763b8c43447d0283ac1e444d193cf0cb64890743b24824b82f3d14f408315"} Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.843694 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05b763b8c43447d0283ac1e444d193cf0cb64890743b24824b82f3d14f408315" Nov 21 15:54:09 crc kubenswrapper[4904]: I1121 15:54:09.843641 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-27fc7" Nov 21 15:54:10 crc kubenswrapper[4904]: I1121 15:54:10.639494 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-27fc7"] Nov 21 15:54:10 crc kubenswrapper[4904]: I1121 15:54:10.652615 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-27fc7"] Nov 21 15:54:11 crc kubenswrapper[4904]: I1121 15:54:11.824115 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-6mpcr"] Nov 21 15:54:11 crc kubenswrapper[4904]: E1121 15:54:11.824850 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6eae0b-eec2-4a5e-8c13-b8f0838d359d" containerName="container-00" Nov 21 15:54:11 crc kubenswrapper[4904]: I1121 15:54:11.824865 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6eae0b-eec2-4a5e-8c13-b8f0838d359d" containerName="container-00" Nov 21 15:54:11 crc kubenswrapper[4904]: I1121 15:54:11.825105 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6eae0b-eec2-4a5e-8c13-b8f0838d359d" containerName="container-00" Nov 21 15:54:11 crc kubenswrapper[4904]: I1121 15:54:11.826104 4904 util.go:30] "No sandbox for pod can be found. 
Nov 21 15:54:11 crc kubenswrapper[4904]: I1121 15:54:11.956665 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjb26\" (UniqueName: \"kubernetes.io/projected/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-kube-api-access-kjb26\") pod \"crc-debug-6mpcr\" (UID: \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\") " pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:11 crc kubenswrapper[4904]: I1121 15:54:11.956817 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-host\") pod \"crc-debug-6mpcr\" (UID: \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\") " pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.058933 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjb26\" (UniqueName: \"kubernetes.io/projected/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-kube-api-access-kjb26\") pod \"crc-debug-6mpcr\" (UID: \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\") " pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.059042 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-host\") pod \"crc-debug-6mpcr\" (UID: \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\") " pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.059244 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-host\") pod \"crc-debug-6mpcr\" (UID: \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\") " pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.077705 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjb26\" (UniqueName: \"kubernetes.io/projected/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-kube-api-access-kjb26\") pod \"crc-debug-6mpcr\" (UID: \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\") " pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.144429 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.529082 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6eae0b-eec2-4a5e-8c13-b8f0838d359d" path="/var/lib/kubelet/pods/ad6eae0b-eec2-4a5e-8c13-b8f0838d359d/volumes"
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.879185 4904 generic.go:334] "Generic (PLEG): container finished" podID="026b3749-4a7b-4ea7-85d7-6eb0f30d6925" containerID="c836096b307d5ae464176fed8035eb4f943415f6bc8692a488e54d3f00575047" exitCode=0
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.879302 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/crc-debug-6mpcr" event={"ID":"026b3749-4a7b-4ea7-85d7-6eb0f30d6925","Type":"ContainerDied","Data":"c836096b307d5ae464176fed8035eb4f943415f6bc8692a488e54d3f00575047"}
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.879965 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/crc-debug-6mpcr" event={"ID":"026b3749-4a7b-4ea7-85d7-6eb0f30d6925","Type":"ContainerStarted","Data":"a07ce37385caa972e5f9de1aac2dc202ea4e5f70bda5d75b0e55b41123d6dfe3"}
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.924506 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-6mpcr"]
Nov 21 15:54:12 crc kubenswrapper[4904]: I1121 15:54:12.933707 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bnnlm/crc-debug-6mpcr"]
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.014713 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.106775 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjb26\" (UniqueName: \"kubernetes.io/projected/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-kube-api-access-kjb26\") pod \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\" (UID: \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\") "
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.106905 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-host\") pod \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\" (UID: \"026b3749-4a7b-4ea7-85d7-6eb0f30d6925\") "
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.107005 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-host" (OuterVolumeSpecName: "host") pod "026b3749-4a7b-4ea7-85d7-6eb0f30d6925" (UID: "026b3749-4a7b-4ea7-85d7-6eb0f30d6925"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.107513 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-host\") on node \"crc\" DevicePath \"\""
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.113194 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-kube-api-access-kjb26" (OuterVolumeSpecName: "kube-api-access-kjb26") pod "026b3749-4a7b-4ea7-85d7-6eb0f30d6925" (UID: "026b3749-4a7b-4ea7-85d7-6eb0f30d6925"). InnerVolumeSpecName "kube-api-access-kjb26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.209380 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjb26\" (UniqueName: \"kubernetes.io/projected/026b3749-4a7b-4ea7-85d7-6eb0f30d6925-kube-api-access-kjb26\") on node \"crc\" DevicePath \"\""
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.526545 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="026b3749-4a7b-4ea7-85d7-6eb0f30d6925" path="/var/lib/kubelet/pods/026b3749-4a7b-4ea7-85d7-6eb0f30d6925/volumes"
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.902691 4904 scope.go:117] "RemoveContainer" containerID="c836096b307d5ae464176fed8035eb4f943415f6bc8692a488e54d3f00575047"
Nov 21 15:54:14 crc kubenswrapper[4904]: I1121 15:54:14.902817 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/crc-debug-6mpcr"
Nov 21 15:54:22 crc kubenswrapper[4904]: I1121 15:54:22.513242 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"
Nov 21 15:54:22 crc kubenswrapper[4904]: E1121 15:54:22.514065 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:54:37 crc kubenswrapper[4904]: I1121 15:54:37.513682 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"
Nov 21 15:54:37 crc kubenswrapper[4904]: E1121 15:54:37.515470 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:54:40 crc kubenswrapper[4904]: I1121 15:54:40.896459 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_53e60150-0305-4106-8864-769576a7016c/aodh-api/0.log"
Nov 21 15:54:40 crc kubenswrapper[4904]: I1121 15:54:40.994787 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_53e60150-0305-4106-8864-769576a7016c/aodh-evaluator/0.log"
Nov 21 15:54:41 crc kubenswrapper[4904]: I1121 15:54:41.105599 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_53e60150-0305-4106-8864-769576a7016c/aodh-listener/0.log"
Nov 21 15:54:41 crc kubenswrapper[4904]: I1121 15:54:41.116381 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_53e60150-0305-4106-8864-769576a7016c/aodh-notifier/0.log"
Nov 21 15:54:41 crc kubenswrapper[4904]: I1121 15:54:41.203943 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f4785b448-tkwjh_5533e799-de81-4937-a2ca-8876e2bf3c22/barbican-api/0.log"
Nov 21 15:54:41 crc kubenswrapper[4904]: I1121 15:54:41.331344 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f4785b448-tkwjh_5533e799-de81-4937-a2ca-8876e2bf3c22/barbican-api-log/0.log"
Nov 21 15:54:41 crc kubenswrapper[4904]: I1121 15:54:41.441312 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75d77468c8-8lrt8_8b360c72-5d34-4e63-b653-3f3e80539384/barbican-keystone-listener/0.log"
Nov 21 15:54:41 crc kubenswrapper[4904]: I1121 15:54:41.583201 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75d77468c8-8lrt8_8b360c72-5d34-4e63-b653-3f3e80539384/barbican-keystone-listener-log/0.log"
Nov 21 15:54:41 crc kubenswrapper[4904]: I1121 15:54:41.684863 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f6485bcff-l7ccc_9a6718b1-07c1-4270-a030-cc36bef71bbc/barbican-worker/0.log"
Nov 21 15:54:41 crc kubenswrapper[4904]: I1121 15:54:41.777211 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f6485bcff-l7ccc_9a6718b1-07c1-4270-a030-cc36bef71bbc/barbican-worker-log/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.049380 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q_0a28f51f-d088-4c6b-aec9-9fcde8dd9b94/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.175122 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7cd84528-51ed-4df2-81cd-d0793668a01a/ceilometer-central-agent/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.279168 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7cd84528-51ed-4df2-81cd-d0793668a01a/sg-core/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.289128 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7cd84528-51ed-4df2-81cd-d0793668a01a/proxy-httpd/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.302160 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7cd84528-51ed-4df2-81cd-d0793668a01a/ceilometer-notification-agent/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.508471 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn_9eb481be-e58b-4b7a-be65-188c8c4c6d70/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.524644 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb_e446baad-37a3-4206-a40a-67ac35889d21/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.800365 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fd487d38-5efb-41f0-88f9-ba5360b8c3cf/cinder-api-log/0.log"
Nov 21 15:54:42 crc kubenswrapper[4904]: I1121 15:54:42.974452 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fd487d38-5efb-41f0-88f9-ba5360b8c3cf/cinder-api/0.log"
Nov 21 15:54:43 crc kubenswrapper[4904]: I1121 15:54:43.181925 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4/probe/0.log"
Nov 21 15:54:43 crc kubenswrapper[4904]: I1121 15:54:43.219071 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4/cinder-backup/0.log"
Nov 21 15:54:43 crc kubenswrapper[4904]: I1121 15:54:43.362957 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2b6933e2-83af-4bab-b20b-498a29cc1a68/cinder-scheduler/0.log"
Nov 21 15:54:43 crc kubenswrapper[4904]: I1121 15:54:43.580399 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2b6933e2-83af-4bab-b20b-498a29cc1a68/probe/0.log"
Nov 21 15:54:43 crc kubenswrapper[4904]: I1121 15:54:43.663535 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_186f682e-35c9-47ac-8e62-264769272a1b/cinder-volume/0.log"
Nov 21 15:54:43 crc kubenswrapper[4904]: I1121 15:54:43.702445 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_186f682e-35c9-47ac-8e62-264769272a1b/probe/0.log"
Nov 21 15:54:43 crc kubenswrapper[4904]: I1121 15:54:43.942953 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm_48746c32-dedf-4967-ba11-9765f1a17ec7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:44 crc kubenswrapper[4904]: I1121 15:54:44.004778 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-svg6h_f292a2fb-beff-4c38-891a-db1e34c7157d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:44 crc kubenswrapper[4904]: I1121 15:54:44.518904 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5767ddb7c-mbpns_a141634c-9846-4d10-89f0-a5a28a50d016/init/0.log"
Nov 21 15:54:44 crc kubenswrapper[4904]: I1121 15:54:44.786352 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5767ddb7c-mbpns_a141634c-9846-4d10-89f0-a5a28a50d016/init/0.log"
Nov 21 15:54:44 crc kubenswrapper[4904]: I1121 15:54:44.820260 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5767ddb7c-mbpns_a141634c-9846-4d10-89f0-a5a28a50d016/dnsmasq-dns/0.log"
Nov 21 15:54:44 crc kubenswrapper[4904]: I1121 15:54:44.898483 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8cf362a-fd51-4493-967e-aa0462ce4007/glance-httpd/0.log"
Nov 21 15:54:45 crc kubenswrapper[4904]: I1121 15:54:45.023408 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8cf362a-fd51-4493-967e-aa0462ce4007/glance-log/0.log"
Nov 21 15:54:45 crc kubenswrapper[4904]: I1121 15:54:45.096584 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e079f5ff-96a7-417b-bc95-a4578fe3a4ec/glance-httpd/0.log"
Nov 21 15:54:45 crc kubenswrapper[4904]: I1121 15:54:45.127549 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e079f5ff-96a7-417b-bc95-a4578fe3a4ec/glance-log/0.log"
Nov 21 15:54:45 crc kubenswrapper[4904]: I1121 15:54:45.743501 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5bcc58b9d9-fdhrg_c9c258f6-6c0c-4072-bd71-610209d2bbb9/heat-engine/0.log"
Nov 21 15:54:46 crc kubenswrapper[4904]: I1121 15:54:46.186790 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-695bd477bb-gcrxw_7fc8baac-d51a-42f4-9444-c8e4172be134/horizon/0.log"
Nov 21 15:54:46 crc kubenswrapper[4904]: I1121 15:54:46.538283 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5bd7cc4489-fmh8v_da6b9694-5408-45ce-8e1c-3cf05c404837/heat-api/0.log"
Nov 21 15:54:46 crc kubenswrapper[4904]: I1121 15:54:46.665150 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx_2e2f263f-26f6-4c10-b020-898f112d23d6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:46 crc kubenswrapper[4904]: I1121 15:54:46.841868 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-695bd477bb-gcrxw_7fc8baac-d51a-42f4-9444-c8e4172be134/horizon-log/0.log"
Nov 21 15:54:46 crc kubenswrapper[4904]: I1121 15:54:46.851367 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7669f9847d-7bgvn_55eba19c-92ed-4a40-81cb-9085d6becd76/heat-cfnapi/0.log"
Nov 21 15:54:46 crc kubenswrapper[4904]: I1121 15:54:46.978460 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qzzvq_5863ea30-41cd-46f1-9c0f-95d2367aa9aa/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:47 crc kubenswrapper[4904]: I1121 15:54:47.130663 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395561-8g6qs_ba2790bd-7e69-4181-8240-95640e6696e4/keystone-cron/0.log"
Nov 21 15:54:47 crc kubenswrapper[4904]: I1121 15:54:47.391320 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395621-lkq6r_b6002e56-ad4f-43e5-928f-602c88ed887d/keystone-cron/0.log"
Nov 21 15:54:47 crc kubenswrapper[4904]: I1121 15:54:47.497331 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a18139cc-0f50-4c9f-bbb2-7637d2a3c299/kube-state-metrics/0.log"
Nov 21 15:54:47 crc kubenswrapper[4904]: I1121 15:54:47.743146 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b64bb974-stp6c_9b547005-5eef-4c9a-91a0-7d796e269d05/keystone-api/0.log"
Nov 21 15:54:47 crc kubenswrapper[4904]: I1121 15:54:47.797680 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc_b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:47 crc kubenswrapper[4904]: I1121 15:54:47.884905 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-k8zt5_16cb76b3-8c36-46f5-b221-df0d03da240e/logging-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:48 crc kubenswrapper[4904]: I1121 15:54:48.446382 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_9a0b60d4-f457-4e29-bb0e-f244826249aa/manila-api-log/0.log"
Nov 21 15:54:48 crc kubenswrapper[4904]: I1121 15:54:48.579258 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_9a0b60d4-f457-4e29-bb0e-f244826249aa/manila-api/0.log"
Nov 21 15:54:48 crc kubenswrapper[4904]: I1121 15:54:48.637949 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8bea7feb-21d5-4b01-98b3-5b16737f0274/manila-scheduler/0.log"
Nov 21 15:54:48 crc kubenswrapper[4904]: I1121 15:54:48.679881 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8bea7feb-21d5-4b01-98b3-5b16737f0274/probe/0.log"
Nov 21 15:54:48 crc kubenswrapper[4904]: I1121 15:54:48.811083 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f2e2192d-1157-4025-9df6-deab99f244fd/probe/0.log"
Nov 21 15:54:48 crc kubenswrapper[4904]: I1121 15:54:48.874512 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f2e2192d-1157-4025-9df6-deab99f244fd/manila-share/0.log"
Nov 21 15:54:49 crc kubenswrapper[4904]: I1121 15:54:49.065129 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_78e4e986-d20d-4494-bff4-3cc0fb8af825/mysqld-exporter/0.log"
Nov 21 15:54:49 crc kubenswrapper[4904]: I1121 15:54:49.304716 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3f2e9b6a-f2dd-4647-9652-e5f609740a53/memcached/0.log"
Nov 21 15:54:49 crc kubenswrapper[4904]: I1121 15:54:49.557939 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bc5bc9bb5-xls6r_9814cd75-e32a-40e2-9509-08d8256ee1c7/neutron-httpd/0.log"
Nov 21 15:54:49 crc kubenswrapper[4904]: I1121 15:54:49.566457 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bc5bc9bb5-xls6r_9814cd75-e32a-40e2-9509-08d8256ee1c7/neutron-api/0.log"
Nov 21 15:54:49 crc kubenswrapper[4904]: I1121 15:54:49.584721 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq_7cc82b71-e69b-4404-843c-afdc4b449ab4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:50 crc kubenswrapper[4904]: I1121 15:54:50.054789 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cbabfd9a-3db8-4b71-886c-1986df601c51/nova-cell0-conductor-conductor/0.log"
Nov 21 15:54:50 crc kubenswrapper[4904]: I1121 15:54:50.308681 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_89e907ba-30e5-4c8e-921d-6560b56a80d8/nova-cell1-conductor-conductor/0.log"
Nov 21 15:54:50 crc kubenswrapper[4904]: I1121 15:54:50.513485 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760"
Nov 21 15:54:50 crc kubenswrapper[4904]: E1121 15:54:50.513770 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 15:54:50 crc kubenswrapper[4904]: I1121 15:54:50.589009 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_72657d59-241e-443a-9a53-f9c794b67958/nova-cell1-novncproxy-novncproxy/0.log"
Nov 21 15:54:50 crc kubenswrapper[4904]: I1121 15:54:50.614037 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl_f238fb8b-7193-4412-ac72-19c3161f2735/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 21 15:54:50 crc kubenswrapper[4904]: I1121 15:54:50.621917 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9cab507d-ee28-4da3-9ed5-524c74530da5/nova-api-log/0.log"
Nov 21 15:54:50 crc kubenswrapper[4904]: I1121 15:54:50.903793 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_56ae0c0e-1f08-4114-adf0-e4a915d519aa/nova-metadata-log/0.log"
path="/var/log/pods/openstack_nova-metadata-0_56ae0c0e-1f08-4114-adf0-e4a915d519aa/nova-metadata-log/0.log" Nov 21 15:54:51 crc kubenswrapper[4904]: I1121 15:54:51.034673 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9cab507d-ee28-4da3-9ed5-524c74530da5/nova-api-api/0.log" Nov 21 15:54:51 crc kubenswrapper[4904]: I1121 15:54:51.268882 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2987646d-06ff-44a7-b766-ff6ff19ed796/mysql-bootstrap/0.log" Nov 21 15:54:51 crc kubenswrapper[4904]: I1121 15:54:51.279265 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e6891983-1117-4522-9431-2c73ac552c8a/nova-scheduler-scheduler/0.log" Nov 21 15:54:51 crc kubenswrapper[4904]: I1121 15:54:51.429534 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2987646d-06ff-44a7-b766-ff6ff19ed796/mysql-bootstrap/0.log" Nov 21 15:54:51 crc kubenswrapper[4904]: I1121 15:54:51.522191 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_554709ef-9d18-4a19-aded-0c8fe94e30e8/mysql-bootstrap/0.log" Nov 21 15:54:51 crc kubenswrapper[4904]: I1121 15:54:51.586461 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2987646d-06ff-44a7-b766-ff6ff19ed796/galera/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.019063 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_554709ef-9d18-4a19-aded-0c8fe94e30e8/mysql-bootstrap/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.107638 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_db36aaca-d216-45b3-b8f1-f7a94bae89e6/openstackclient/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.148827 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_554709ef-9d18-4a19-aded-0c8fe94e30e8/galera/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.293279 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gh8lv_48ec880a-b9f8-4b7a-9d69-98b730a07a02/ovn-controller/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.361938 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tp8md_c8dbdbe2-57e0-435a-ab3e-4dd526d584b3/openstack-network-exporter/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.575270 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jt79x_6f19242e-f99f-4408-9c53-7a92a8c191bc/ovsdb-server-init/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.807283 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jt79x_6f19242e-f99f-4408-9c53-7a92a8c191bc/ovsdb-server-init/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.822852 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jt79x_6f19242e-f99f-4408-9c53-7a92a8c191bc/ovs-vswitchd/0.log" Nov 21 15:54:52 crc kubenswrapper[4904]: I1121 15:54:52.882536 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jt79x_6f19242e-f99f-4408-9c53-7a92a8c191bc/ovsdb-server/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.085416 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qcxnt_c336a560-11e3-4740-b2f3-ebc5203fb0ad/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.176899 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5208bbb4-5fe6-4507-9308-35202ce25115/openstack-network-exporter/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.177258 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5208bbb4-5fe6-4507-9308-35202ce25115/ovn-northd/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.312833 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_56ae0c0e-1f08-4114-adf0-e4a915d519aa/nova-metadata-metadata/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.424275 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1ac2319f-b7ee-441c-b325-5ca2d83c87e4/openstack-network-exporter/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.452441 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1ac2319f-b7ee-441c-b325-5ca2d83c87e4/ovsdbserver-nb/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.619501 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b7f4b26a-f41d-478c-b706-67baa265aaf8/openstack-network-exporter/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.638504 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b7f4b26a-f41d-478c-b706-67baa265aaf8/ovsdbserver-sb/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.814520 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f758bd6b6-r5mk2_9653a120-e74b-4330-8e8f-faf95b13f63e/placement-api/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.967062 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f758bd6b6-r5mk2_9653a120-e74b-4330-8e8f-faf95b13f63e/placement-log/0.log" Nov 21 15:54:53 crc kubenswrapper[4904]: I1121 15:54:53.977696 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/init-config-reloader/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.124757 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/init-config-reloader/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.162723 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/thanos-sidecar/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.167544 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/prometheus/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.184334 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/config-reloader/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.345032 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b1f6e46-f0d4-421a-bb86-48f1d622cd97/setup-container/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.480279 4904 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b1f6e46-f0d4-421a-bb86-48f1d622cd97/setup-container/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.524641 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b1f6e46-f0d4-421a-bb86-48f1d622cd97/rabbitmq/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.534638 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fdcbae10-10ee-4213-8758-ce56fbe6a27e/setup-container/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.792910 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fdcbae10-10ee-4213-8758-ce56fbe6a27e/setup-container/0.log" Nov 21 15:54:54 crc kubenswrapper[4904]: I1121 15:54:54.848922 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fdcbae10-10ee-4213-8758-ce56fbe6a27e/rabbitmq/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.181543 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t_facb16e4-106b-43c2-a62c-92103c2137ee/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.310531 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc_620a1fcb-b550-41c2-964f-f3212ff8a2d0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.408389 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8plpf_b17ec983-5d7e-4e15-807e-393999d4aa0e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.471793 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-kzpz9_52c0e44a-c467-4a11-a4f9-21f59b8cd3c5/ssh-known-hosts-edpm-deployment/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.689874 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f94fcfbbf-pg26j_e819c802-1c71-4668-bc99-5b41cc11c656/proxy-server/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.766544 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f94fcfbbf-pg26j_e819c802-1c71-4668-bc99-5b41cc11c656/proxy-httpd/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.799258 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-x9zjr_c847f685-6c92-40df-9608-675e8f21c058/swift-ring-rebalance/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.951271 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/account-reaper/0.log" Nov 21 15:54:55 crc kubenswrapper[4904]: I1121 15:54:55.970144 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/account-auditor/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.032046 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/account-replicator/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.080464 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/account-server/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.191325 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/container-replicator/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.192433 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/container-auditor/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.219277 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/container-server/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.220167 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/container-updater/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.332232 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-auditor/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.431227 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-server/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.433836 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-replicator/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.445382 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-expirer/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.475087 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-updater/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.575006 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/rsync/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.618842 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/swift-recon-cron/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.694279 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-49lxz_168c7941-23ec-43f5-8849-04a31a928d0a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:54:56 crc kubenswrapper[4904]: I1121 15:54:56.816982 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc_82b0bfd1-2b9d-48c8-89dd-74db2d011083/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:54:57 crc kubenswrapper[4904]: I1121 15:54:57.055133 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6675a319-1e0d-4549-b2f0-8e5307295f66/test-operator-logs-container/0.log" Nov 21 15:54:57 crc kubenswrapper[4904]: I1121 15:54:57.192057 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2_81de75d6-c869-454d-a2a0-09557d478c99/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 15:54:57 crc kubenswrapper[4904]: I1121 15:54:57.614945 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_24d17dbe-f722-4f81-b271-ca1e191780a8/tempest-tests-tempest-tests-runner/0.log" Nov 21 15:55:04 crc kubenswrapper[4904]: I1121 15:55:04.513191 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:55:04 crc kubenswrapper[4904]: E1121 15:55:04.513947 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:55:18 crc kubenswrapper[4904]: I1121 15:55:18.513274 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:55:18 crc kubenswrapper[4904]: E1121 15:55:18.514370 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:55:22 crc kubenswrapper[4904]: I1121 15:55:22.429558 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/util/0.log" Nov 21 15:55:22 crc kubenswrapper[4904]: I1121 15:55:22.594016 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/util/0.log" Nov 21 15:55:22 crc kubenswrapper[4904]: I1121 15:55:22.628330 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/pull/0.log" Nov 21 15:55:22 crc kubenswrapper[4904]: I1121 15:55:22.653716 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/pull/0.log" Nov 21 15:55:22 crc kubenswrapper[4904]: I1121 15:55:22.821106 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/util/0.log" Nov 21 15:55:22 crc kubenswrapper[4904]: I1121 15:55:22.835186 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/extract/0.log" Nov 21 15:55:22 crc kubenswrapper[4904]: I1121 15:55:22.865839 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/pull/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.038717 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-j9dcf_50f3313d-ff99-4b0e-931a-c2a774375ae3/kube-rbac-proxy/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.092507 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-j9dcf_50f3313d-ff99-4b0e-931a-c2a774375ae3/manager/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.140517 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-hm8cc_4b290147-91ef-4734-961b-b61487960c33/kube-rbac-proxy/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.277168 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-hm8cc_4b290147-91ef-4734-961b-b61487960c33/manager/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.305512 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-lnp8w_14e3fbea-6dc2-44a4-81be-dfda27a6cdd8/kube-rbac-proxy/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.377238 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-lnp8w_14e3fbea-6dc2-44a4-81be-dfda27a6cdd8/manager/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.529618 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-bnt22_b91d5a1c-a2d5-4875-a23a-e43ae7f18937/kube-rbac-proxy/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.635968 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-bnt22_b91d5a1c-a2d5-4875-a23a-e43ae7f18937/manager/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.736329 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-ncvqt_55b86375-94e3-4c12-96b9-c5f581b3d8f3/kube-rbac-proxy/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.843967 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-ncvqt_55b86375-94e3-4c12-96b9-c5f581b3d8f3/manager/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.907783 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-jp6bp_d81ae352-08d2-433c-b883-deeb78888945/kube-rbac-proxy/0.log" Nov 21 15:55:23 crc kubenswrapper[4904]: I1121 15:55:23.948860 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-jp6bp_d81ae352-08d2-433c-b883-deeb78888945/manager/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.072068 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-h78bp_64f35f86-9389-4506-ad53-d42eec926447/kube-rbac-proxy/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.280561 4904 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-h78bp_64f35f86-9389-4506-ad53-d42eec926447/manager/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.287362 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-wvfgt_15e30337-bd79-4d01-b3ab-2177d3c0609b/kube-rbac-proxy/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.288957 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-wvfgt_15e30337-bd79-4d01-b3ab-2177d3c0609b/manager/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.482551 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nkqwb_37bdccc0-c16d-4523-94d7-978d8313ca7f/kube-rbac-proxy/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.534372 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nkqwb_37bdccc0-c16d-4523-94d7-978d8313ca7f/manager/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.692804 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xr5mv_98bcfebc-c45f-4a2e-a21f-9c8cf892898c/kube-rbac-proxy/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.735926 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xr5mv_98bcfebc-c45f-4a2e-a21f-9c8cf892898c/manager/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.761076 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tg85w_72dacec1-d81b-46df-acd2-962095286389/kube-rbac-proxy/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.904352 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tg85w_72dacec1-d81b-46df-acd2-962095286389/manager/0.log" Nov 21 15:55:24 crc kubenswrapper[4904]: I1121 15:55:24.935206 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-b8xrh_301f4657-8519-4071-a82b-b35f80739372/kube-rbac-proxy/0.log" Nov 21 15:55:25 crc kubenswrapper[4904]: I1121 15:55:25.010554 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-b8xrh_301f4657-8519-4071-a82b-b35f80739372/manager/0.log" Nov 21 15:55:25 crc kubenswrapper[4904]: I1121 15:55:25.130688 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-n2t2q_4dc68816-ed56-4e8b-a41b-b91868bc57d3/kube-rbac-proxy/0.log" Nov 21 15:55:25 crc kubenswrapper[4904]: I1121 15:55:25.248306 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-n2t2q_4dc68816-ed56-4e8b-a41b-b91868bc57d3/manager/0.log" Nov 21 15:55:25 crc kubenswrapper[4904]: I1121 15:55:25.358063 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-6bxqf_c8300ff9-666c-4d85-bd43-120d41529215/manager/0.log" Nov 21 15:55:25 crc kubenswrapper[4904]: I1121 
15:55:25.377652 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-6bxqf_c8300ff9-666c-4d85-bd43-120d41529215/kube-rbac-proxy/0.log" Nov 21 15:55:25 crc kubenswrapper[4904]: I1121 15:55:25.474531 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t_ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd/kube-rbac-proxy/0.log" Nov 21 15:55:25 crc kubenswrapper[4904]: I1121 15:55:25.634333 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t_ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd/manager/0.log" Nov 21 15:55:26 crc kubenswrapper[4904]: I1121 15:55:26.112013 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7gkpw_bd5157a7-e11b-444d-8e6b-d96382adc923/registry-server/0.log" Nov 21 15:55:26 crc kubenswrapper[4904]: I1121 15:55:26.240931 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7bc9ddc77b-wmm4f_760055d2-e646-4466-a667-90292a69546a/operator/0.log" Nov 21 15:55:26 crc kubenswrapper[4904]: I1121 15:55:26.406190 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-qmmmc_887b5387-ce64-43b1-8755-2c401719a2d6/kube-rbac-proxy/0.log" Nov 21 15:55:26 crc kubenswrapper[4904]: I1121 15:55:26.539027 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-qmmmc_887b5387-ce64-43b1-8755-2c401719a2d6/manager/0.log" Nov 21 15:55:26 crc kubenswrapper[4904]: I1121 15:55:26.602086 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-lrfmt_27625471-8f27-449b-a245-558e079a38ab/kube-rbac-proxy/0.log" Nov 21 15:55:26 crc kubenswrapper[4904]: I1121 15:55:26.753165 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-lrfmt_27625471-8f27-449b-a245-558e079a38ab/manager/0.log" Nov 21 15:55:26 crc kubenswrapper[4904]: I1121 15:55:26.873938 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5lzvr_2e76f101-21a3-4b78-970e-55a016ee2a40/operator/0.log" Nov 21 15:55:27 crc kubenswrapper[4904]: I1121 15:55:27.020567 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-kpwdb_756ba318-ec48-4012-9d9d-108c3f1fad3c/manager/0.log" Nov 21 15:55:27 crc kubenswrapper[4904]: I1121 15:55:27.036001 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-kpwdb_756ba318-ec48-4012-9d9d-108c3f1fad3c/kube-rbac-proxy/0.log" Nov 21 15:55:27 crc kubenswrapper[4904]: I1121 15:55:27.228450 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7fc59d4bfd-vzk7g_90ef3fb1-0f63-4fd0-94ef-2fefce011d23/kube-rbac-proxy/0.log" Nov 21 15:55:27 crc kubenswrapper[4904]: I1121 15:55:27.467987 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-vtkhq_a8dc5688-31cd-412c-91e9-3ae137d2a20a/kube-rbac-proxy/0.log" Nov 21 15:55:27 crc 
kubenswrapper[4904]: I1121 15:55:27.559264 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-vtkhq_a8dc5688-31cd-412c-91e9-3ae137d2a20a/manager/0.log" Nov 21 15:55:27 crc kubenswrapper[4904]: I1121 15:55:27.724629 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79fb5496bb-zp56v_80a18488-07da-4b66-b164-40f7d7027b5b/manager/0.log" Nov 21 15:55:27 crc kubenswrapper[4904]: I1121 15:55:27.728016 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7fc59d4bfd-vzk7g_90ef3fb1-0f63-4fd0-94ef-2fefce011d23/manager/0.log" Nov 21 15:55:27 crc kubenswrapper[4904]: I1121 15:55:27.728954 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-57jrf_fb4141a1-4768-45ae-a8e8-ec1d1c01db4e/kube-rbac-proxy/0.log" Nov 21 15:55:27 crc kubenswrapper[4904]: I1121 15:55:27.801892 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-57jrf_fb4141a1-4768-45ae-a8e8-ec1d1c01db4e/manager/0.log" Nov 21 15:55:32 crc kubenswrapper[4904]: I1121 15:55:32.514202 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:55:32 crc kubenswrapper[4904]: E1121 15:55:32.515249 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:55:43 crc kubenswrapper[4904]: I1121 15:55:43.513360 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:55:43 crc kubenswrapper[4904]: E1121 15:55:43.514099 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:55:44 crc kubenswrapper[4904]: I1121 15:55:44.993164 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-tl29t_4b0cf0a0-036d-4e79-836e-208aa70df688/control-plane-machine-set-operator/0.log" Nov 21 15:55:45 crc kubenswrapper[4904]: I1121 15:55:45.170729 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qdh9s_b29d96e3-7aa4-4626-a245-93ee36f7595f/machine-api-operator/0.log" Nov 21 15:55:45 crc kubenswrapper[4904]: I1121 15:55:45.173132 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qdh9s_b29d96e3-7aa4-4626-a245-93ee36f7595f/kube-rbac-proxy/0.log" Nov 21 15:55:56 crc kubenswrapper[4904]: I1121 15:55:56.938037 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-w2h7s_e0eaefd8-c20d-4081-baa3-df1a92c06136/cert-manager-controller/0.log" Nov 21 15:55:57 crc kubenswrapper[4904]: I1121 15:55:57.090389 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-hz5x8_c9229a7d-9559-43dd-8470-5e0377837fa3/cert-manager-cainjector/0.log" Nov 21 15:55:57 crc kubenswrapper[4904]: I1121 15:55:57.149686 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-q7hpz_9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80/cert-manager-webhook/0.log" Nov 21 15:55:57 crc kubenswrapper[4904]: I1121 15:55:57.513556 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:55:57 crc kubenswrapper[4904]: E1121 15:55:57.513881 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:56:08 crc kubenswrapper[4904]: I1121 15:56:08.642403 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-vkkht_46f05915-4deb-45e5-8f4e-109e6c633d4e/nmstate-console-plugin/0.log" Nov 21 15:56:08 crc kubenswrapper[4904]: I1121 15:56:08.825876 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-49ws5_ae90deaf-c915-46ad-b731-7b79e746fffa/kube-rbac-proxy/0.log" Nov 21 15:56:08 crc kubenswrapper[4904]: I1121 15:56:08.827727 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-m899x_a317caad-8aa4-4620-9812-975486e6c3f6/nmstate-handler/0.log" Nov 21 15:56:08 crc kubenswrapper[4904]: I1121 15:56:08.876787 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-49ws5_ae90deaf-c915-46ad-b731-7b79e746fffa/nmstate-metrics/0.log" Nov 21 15:56:09 crc kubenswrapper[4904]: I1121 15:56:09.137643 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-ql6sj_3b9ad4cd-55bf-41e4-8740-b88b8be917c5/nmstate-operator/0.log" Nov 21 15:56:09 crc kubenswrapper[4904]: I1121 15:56:09.206500 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-5j92x_d39cd226-e819-4c84-9b6b-c28b8ae7d638/nmstate-webhook/0.log" Nov 21 15:56:10 crc kubenswrapper[4904]: I1121 15:56:10.513399 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:56:10 crc kubenswrapper[4904]: E1121 15:56:10.514034 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:56:20 crc kubenswrapper[4904]: I1121 15:56:20.569353 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5fc6c85b79-9lzch_d9a95797-5a30-4849-8c30-13f3ff99a4c2/kube-rbac-proxy/0.log" Nov 21 15:56:20 crc kubenswrapper[4904]: I1121 15:56:20.641607 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5fc6c85b79-9lzch_d9a95797-5a30-4849-8c30-13f3ff99a4c2/manager/0.log" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.054040 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zkfg6"] Nov 21 15:56:21 crc kubenswrapper[4904]: E1121 15:56:21.054495 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026b3749-4a7b-4ea7-85d7-6eb0f30d6925" containerName="container-00" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.054512 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="026b3749-4a7b-4ea7-85d7-6eb0f30d6925" containerName="container-00" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.054791 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="026b3749-4a7b-4ea7-85d7-6eb0f30d6925" containerName="container-00" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.059587 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.075082 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkfg6"] Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.099412 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-utilities\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.099536 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvlx\" (UniqueName: \"kubernetes.io/projected/74161691-91dc-4a3e-bc23-c433c8d5ac52-kube-api-access-qqvlx\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.100094 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-catalog-content\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.202041 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvlx\" (UniqueName: \"kubernetes.io/projected/74161691-91dc-4a3e-bc23-c433c8d5ac52-kube-api-access-qqvlx\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.202199 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-catalog-content\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" 
Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.202267 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-utilities\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.203880 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-catalog-content\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.205028 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-utilities\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.243058 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvlx\" (UniqueName: \"kubernetes.io/projected/74161691-91dc-4a3e-bc23-c433c8d5ac52-kube-api-access-qqvlx\") pod \"redhat-operators-zkfg6\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:21 crc kubenswrapper[4904]: I1121 15:56:21.414941 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:22 crc kubenswrapper[4904]: I1121 15:56:22.804346 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zkfg6"] Nov 21 15:56:23 crc kubenswrapper[4904]: I1121 15:56:23.249958 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkfg6" event={"ID":"74161691-91dc-4a3e-bc23-c433c8d5ac52","Type":"ContainerStarted","Data":"58d53750279e57782cb2a297f17c3e12d0ff75a82e33f09731b97e76359dd1d1"} Nov 21 15:56:23 crc kubenswrapper[4904]: I1121 15:56:23.513800 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:56:23 crc kubenswrapper[4904]: E1121 15:56:23.514084 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:56:24 crc kubenswrapper[4904]: I1121 15:56:24.262390 4904 generic.go:334] "Generic (PLEG): container finished" podID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerID="8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa" exitCode=0 Nov 21 15:56:24 crc kubenswrapper[4904]: I1121 15:56:24.262432 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkfg6" event={"ID":"74161691-91dc-4a3e-bc23-c433c8d5ac52","Type":"ContainerDied","Data":"8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa"} Nov 21 15:56:24 crc kubenswrapper[4904]: I1121 15:56:24.265706 4904 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 15:56:26 crc kubenswrapper[4904]: I1121 15:56:26.284920 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkfg6" event={"ID":"74161691-91dc-4a3e-bc23-c433c8d5ac52","Type":"ContainerStarted","Data":"a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137"} Nov 21 15:56:35 crc kubenswrapper[4904]: I1121 15:56:35.383978 4904 generic.go:334] "Generic (PLEG): container finished" podID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerID="a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137" exitCode=0 Nov 21 15:56:35 crc kubenswrapper[4904]: I1121 15:56:35.384064 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkfg6" event={"ID":"74161691-91dc-4a3e-bc23-c433c8d5ac52","Type":"ContainerDied","Data":"a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137"} Nov 21 15:56:35 crc kubenswrapper[4904]: I1121 15:56:35.513602 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:56:35 crc kubenswrapper[4904]: E1121 15:56:35.513973 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.063637 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-pc7gc_24f73a17-3c0e-4e6b-9a16-461582908e22/cluster-logging-operator/0.log" Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.248158 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-mz46k_a982be9e-6e45-4a41-b736-9a673acaf3c0/collector/0.log" Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.336606 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_4177ccdd-88ab-419a-a189-2f1af9b587e3/loki-compactor/0.log" Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.425539 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkfg6" event={"ID":"74161691-91dc-4a3e-bc23-c433c8d5ac52","Type":"ContainerStarted","Data":"16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41"} Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.505472 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-6rt82_e4e29add-5250-42af-af50-40efeec82a2d/loki-distributor/0.log" Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.548928 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6fc7c85-psz2n_fafeb02f-1864-4678-9846-d799fa1bc3c4/gateway/0.log" Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.591819 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6fc7c85-psz2n_fafeb02f-1864-4678-9846-d799fa1bc3c4/opa/0.log" Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.861772 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6fc7c85-zrb77_0c5c39d2-3a23-41c1-bec3-317ca97022f4/gateway/0.log" Nov 21 15:56:37 crc kubenswrapper[4904]: I1121 15:56:37.899584 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6fc7c85-zrb77_0c5c39d2-3a23-41c1-bec3-317ca97022f4/opa/0.log" Nov 21 15:56:38 crc kubenswrapper[4904]: I1121 15:56:38.108922 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6/loki-index-gateway/0.log" Nov 21 15:56:38 crc kubenswrapper[4904]: I1121 15:56:38.230726 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_fb578f45-397a-422e-af3c-04841227ef67/loki-ingester/0.log" Nov 21 15:56:38 crc kubenswrapper[4904]: I1121 15:56:38.362602 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-zd9gb_93fb0ef8-1685-4951-b2db-a0921787ff1a/loki-querier/0.log" Nov 21 15:56:38 crc kubenswrapper[4904]: I1121 15:56:38.472769 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-dlmmw_a74cf7e4-1b6c-4547-bb31-eea3e99e49a1/loki-query-frontend/0.log" Nov 21 15:56:41 crc kubenswrapper[4904]: I1121 15:56:41.416600 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:41 crc kubenswrapper[4904]: I1121 15:56:41.417242 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:56:42 crc kubenswrapper[4904]: I1121 15:56:42.474436 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkfg6" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" probeResult="failure" output=< Nov 21 15:56:42 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:56:42 crc kubenswrapper[4904]: > Nov 21 15:56:47 crc kubenswrapper[4904]: I1121 15:56:47.513287 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:56:47 crc kubenswrapper[4904]: E1121 15:56:47.514083 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:56:51 crc kubenswrapper[4904]: I1121 15:56:51.861085 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-q4s9b_37655929-be52-46f4-b914-4500bada3dac/kube-rbac-proxy/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.009257 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-q4s9b_37655929-be52-46f4-b914-4500bada3dac/controller/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.099101 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-9dv2t_12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209/frr-k8s-webhook-server/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.186758 4904 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-frr-files/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.382251 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-reloader/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.391603 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-frr-files/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.436957 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-metrics/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.447088 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-reloader/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.467631 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkfg6" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" probeResult="failure" output=< Nov 21 15:56:52 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:56:52 crc kubenswrapper[4904]: > Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.628787 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-reloader/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.655221 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-frr-files/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.671521 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-metrics/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.685702 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-metrics/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.876972 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-reloader/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.907878 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-metrics/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.915947 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/controller/0.log" Nov 21 15:56:52 crc kubenswrapper[4904]: I1121 15:56:52.935436 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-frr-files/0.log" Nov 21 15:56:53 crc kubenswrapper[4904]: I1121 15:56:53.103232 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/frr-metrics/0.log" Nov 21 15:56:53 crc kubenswrapper[4904]: I1121 15:56:53.131941 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/kube-rbac-proxy-frr/0.log" Nov 21 15:56:53 crc 
kubenswrapper[4904]: I1121 15:56:53.145887 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/kube-rbac-proxy/0.log" Nov 21 15:56:53 crc kubenswrapper[4904]: I1121 15:56:53.322393 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/reloader/0.log" Nov 21 15:56:53 crc kubenswrapper[4904]: I1121 15:56:53.419138 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7c88768dbc-5mgsg_20d3a8a7-19b0-4ae0-8f88-5e16945e90db/manager/0.log" Nov 21 15:56:53 crc kubenswrapper[4904]: I1121 15:56:53.603534 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8d7cf7c8c-8rg4q_bc2aa529-f9e7-4b13-94e8-e766abd5904f/webhook-server/0.log" Nov 21 15:56:53 crc kubenswrapper[4904]: I1121 15:56:53.824532 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-sfp7g_c20302bf-7009-4422-a47b-6b7fb5b14a99/kube-rbac-proxy/0.log" Nov 21 15:56:54 crc kubenswrapper[4904]: I1121 15:56:54.535275 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-sfp7g_c20302bf-7009-4422-a47b-6b7fb5b14a99/speaker/0.log" Nov 21 15:56:55 crc kubenswrapper[4904]: I1121 15:56:55.952416 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/frr/0.log" Nov 21 15:57:01 crc kubenswrapper[4904]: I1121 15:57:01.513858 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:57:01 crc kubenswrapper[4904]: E1121 15:57:01.514714 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:57:02 crc kubenswrapper[4904]: I1121 15:57:02.463756 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkfg6" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" probeResult="failure" output=< Nov 21 15:57:02 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:57:02 crc kubenswrapper[4904]: > Nov 21 15:57:07 crc kubenswrapper[4904]: I1121 15:57:07.256893 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/util/0.log" Nov 21 15:57:07 crc kubenswrapper[4904]: I1121 15:57:07.479126 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/util/0.log" Nov 21 15:57:07 crc kubenswrapper[4904]: I1121 15:57:07.482091 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/pull/0.log" Nov 21 15:57:07 crc kubenswrapper[4904]: I1121 15:57:07.539949 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/pull/0.log" Nov 21 15:57:07 crc kubenswrapper[4904]: I1121 15:57:07.666158 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/util/0.log" Nov 21 15:57:07 crc kubenswrapper[4904]: I1121 15:57:07.779803 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/extract/0.log" Nov 21 15:57:07 crc kubenswrapper[4904]: I1121 15:57:07.789160 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/pull/0.log" Nov 21 15:57:07 crc kubenswrapper[4904]: I1121 15:57:07.944614 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/util/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.150762 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/util/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.160089 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/pull/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.179029 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/pull/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.437746 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/extract/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.446227 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/pull/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.466000 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/util/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.635592 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/util/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.878759 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/pull/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.900827 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/pull/0.log" Nov 21 15:57:08 crc kubenswrapper[4904]: I1121 15:57:08.918752 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/util/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.123763 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/pull/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.141803 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/extract/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.152273 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/util/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.311411 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/util/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.588363 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/pull/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.588716 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/pull/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.671457 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/util/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.798071 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/extract/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.803022 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/util/0.log" Nov 21 15:57:09 crc kubenswrapper[4904]: I1121 15:57:09.935738 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/pull/0.log" Nov 21 15:57:10 crc kubenswrapper[4904]: I1121 15:57:10.020222 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-utilities/0.log" Nov 21 15:57:10 crc kubenswrapper[4904]: I1121 15:57:10.176160 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-utilities/0.log" Nov 21 15:57:10 crc 
kubenswrapper[4904]: I1121 15:57:10.220776 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-content/0.log" Nov 21 15:57:10 crc kubenswrapper[4904]: I1121 15:57:10.236300 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-content/0.log" Nov 21 15:57:10 crc kubenswrapper[4904]: I1121 15:57:10.418197 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-utilities/0.log" Nov 21 15:57:10 crc kubenswrapper[4904]: I1121 15:57:10.450462 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-content/0.log" Nov 21 15:57:10 crc kubenswrapper[4904]: I1121 15:57:10.766706 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-utilities/0.log" Nov 21 15:57:10 crc kubenswrapper[4904]: I1121 15:57:10.962584 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-utilities/0.log" Nov 21 15:57:11 crc kubenswrapper[4904]: I1121 15:57:11.008323 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-content/0.log" Nov 21 15:57:11 crc kubenswrapper[4904]: I1121 15:57:11.048638 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-content/0.log" Nov 21 15:57:11 crc kubenswrapper[4904]: I1121 15:57:11.516290 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-content/0.log" Nov 21 15:57:11 crc kubenswrapper[4904]: I1121 15:57:11.542465 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-utilities/0.log" Nov 21 15:57:11 crc kubenswrapper[4904]: I1121 15:57:11.866619 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/util/0.log" Nov 21 15:57:11 crc kubenswrapper[4904]: I1121 15:57:11.955580 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/util/0.log" Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.109735 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/pull/0.log" Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.188393 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/pull/0.log" Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.403559 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/extract/0.log" Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.408598 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/util/0.log" Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.408678 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/pull/0.log" Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.491981 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkfg6" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" probeResult="failure" output=< Nov 21 15:57:12 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:57:12 crc kubenswrapper[4904]: > Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.739855 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ww7zw_dd6f0ea3-c491-4f01-a129-e2e0119808b7/marketplace-operator/0.log" Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.940862 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/registry-server/0.log" Nov 21 15:57:12 crc kubenswrapper[4904]: I1121 15:57:12.991681 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-utilities/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.217894 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-utilities/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.221989 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/registry-server/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.249087 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-content/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.296197 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-content/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.529764 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-utilities/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.534266 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-content/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.550104 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-utilities/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.759150 4904 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/registry-server/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.794946 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-utilities/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.797572 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-content/0.log" Nov 21 15:57:13 crc kubenswrapper[4904]: I1121 15:57:13.800908 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-content/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.069269 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-content/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.132311 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-utilities/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.154951 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zkfg6_74161691-91dc-4a3e-bc23-c433c8d5ac52/extract-utilities/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.363789 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zkfg6_74161691-91dc-4a3e-bc23-c433c8d5ac52/extract-content/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.446263 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zkfg6_74161691-91dc-4a3e-bc23-c433c8d5ac52/extract-utilities/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.460860 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zkfg6_74161691-91dc-4a3e-bc23-c433c8d5ac52/extract-content/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.720958 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zkfg6_74161691-91dc-4a3e-bc23-c433c8d5ac52/extract-content/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.763010 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zkfg6_74161691-91dc-4a3e-bc23-c433c8d5ac52/registry-server/0.log" Nov 21 15:57:14 crc kubenswrapper[4904]: I1121 15:57:14.766296 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zkfg6_74161691-91dc-4a3e-bc23-c433c8d5ac52/extract-utilities/0.log" Nov 21 15:57:15 crc kubenswrapper[4904]: I1121 15:57:15.097771 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/registry-server/0.log" Nov 21 15:57:15 crc kubenswrapper[4904]: I1121 15:57:15.514229 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:57:15 crc kubenswrapper[4904]: E1121 15:57:15.514627 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 15:57:22 crc kubenswrapper[4904]: I1121 15:57:22.509087 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zkfg6" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" probeResult="failure" output=< Nov 21 15:57:22 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 15:57:22 crc kubenswrapper[4904]: > Nov 21 15:57:28 crc kubenswrapper[4904]: I1121 15:57:28.514335 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 15:57:28 crc kubenswrapper[4904]: I1121 15:57:28.985106 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"ea4c4cca1717a8906d46b06506e812168b3bae84196c63b90056b1bc268c0f36"} Nov 21 15:57:29 crc kubenswrapper[4904]: I1121 15:57:29.067336 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zkfg6" podStartSLOduration=55.862368646 podStartE2EDuration="1m8.067313619s" podCreationTimestamp="2025-11-21 15:56:21 +0000 UTC" firstStartedPulling="2025-11-21 15:56:24.264971771 +0000 UTC m=+8658.386504323" lastFinishedPulling="2025-11-21 15:56:36.469916744 +0000 UTC m=+8670.591449296" observedRunningTime="2025-11-21 15:56:37.455734531 +0000 UTC m=+8671.577267093" watchObservedRunningTime="2025-11-21 15:57:29.067313619 +0000 UTC m=+8723.188846171" Nov 21 15:57:29 crc kubenswrapper[4904]: I1121 15:57:29.276810 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-gg6qn_c4f53eb1-20cb-4196-89e2-197cecdacc6c/prometheus-operator/0.log" Nov 21 15:57:29 crc kubenswrapper[4904]: I1121 15:57:29.567951 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_7f97ec95-046c-4a0e-9ebf-3baf5fed1053/prometheus-operator-admission-webhook/0.log" Nov 21 15:57:29 crc kubenswrapper[4904]: I1121 15:57:29.601603 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_3d488c9a-40e8-4cce-a260-c4610af92de8/prometheus-operator-admission-webhook/0.log" Nov 21 15:57:29 crc kubenswrapper[4904]: I1121 15:57:29.784099 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-l4jvh_ebdd832d-d268-472a-b067-d61a6c520b7f/operator/0.log" Nov 21 15:57:29 crc kubenswrapper[4904]: I1121 15:57:29.868702 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-nfqjt_6a4ca754-03de-434a-b5ba-3f0d288b1e0c/observability-ui-dashboards/0.log" Nov 21 15:57:29 crc kubenswrapper[4904]: I1121 15:57:29.975400 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-pqc2t_2d39b52b-adba-414a-ba10-66181feecef9/perses-operator/0.log" Nov 21 15:57:31 crc kubenswrapper[4904]: I1121 15:57:31.479313 4904 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:57:31 crc kubenswrapper[4904]: I1121 15:57:31.549889 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:57:31 crc kubenswrapper[4904]: I1121 15:57:31.714689 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkfg6"] Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.028841 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zkfg6" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" containerID="cri-o://16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41" gracePeriod=2 Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.615239 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.754100 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-utilities\") pod \"74161691-91dc-4a3e-bc23-c433c8d5ac52\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.754540 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-catalog-content\") pod \"74161691-91dc-4a3e-bc23-c433c8d5ac52\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.754672 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvlx\" (UniqueName: \"kubernetes.io/projected/74161691-91dc-4a3e-bc23-c433c8d5ac52-kube-api-access-qqvlx\") pod \"74161691-91dc-4a3e-bc23-c433c8d5ac52\" (UID: \"74161691-91dc-4a3e-bc23-c433c8d5ac52\") " Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.754704 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-utilities" (OuterVolumeSpecName: "utilities") pod "74161691-91dc-4a3e-bc23-c433c8d5ac52" (UID: "74161691-91dc-4a3e-bc23-c433c8d5ac52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.755718 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.763385 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74161691-91dc-4a3e-bc23-c433c8d5ac52-kube-api-access-qqvlx" (OuterVolumeSpecName: "kube-api-access-qqvlx") pod "74161691-91dc-4a3e-bc23-c433c8d5ac52" (UID: "74161691-91dc-4a3e-bc23-c433c8d5ac52"). InnerVolumeSpecName "kube-api-access-qqvlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.852369 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74161691-91dc-4a3e-bc23-c433c8d5ac52" (UID: "74161691-91dc-4a3e-bc23-c433c8d5ac52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.858077 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74161691-91dc-4a3e-bc23-c433c8d5ac52-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:57:33 crc kubenswrapper[4904]: I1121 15:57:33.858118 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvlx\" (UniqueName: \"kubernetes.io/projected/74161691-91dc-4a3e-bc23-c433c8d5ac52-kube-api-access-qqvlx\") on node \"crc\" DevicePath \"\"" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.041685 4904 generic.go:334] "Generic (PLEG): container finished" podID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerID="16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41" exitCode=0 Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.041732 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkfg6" event={"ID":"74161691-91dc-4a3e-bc23-c433c8d5ac52","Type":"ContainerDied","Data":"16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41"} Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.041753 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zkfg6" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.041772 4904 scope.go:117] "RemoveContainer" containerID="16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.041761 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zkfg6" event={"ID":"74161691-91dc-4a3e-bc23-c433c8d5ac52","Type":"ContainerDied","Data":"58d53750279e57782cb2a297f17c3e12d0ff75a82e33f09731b97e76359dd1d1"} Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.061019 4904 scope.go:117] "RemoveContainer" containerID="a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.090382 4904 scope.go:117] "RemoveContainer" containerID="8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.093830 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zkfg6"] Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.106643 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zkfg6"] Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.183998 4904 scope.go:117] "RemoveContainer" containerID="16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41" Nov 21 15:57:34 crc kubenswrapper[4904]: E1121 15:57:34.184795 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41\": container with ID starting with 16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41 
not found: ID does not exist" containerID="16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.184860 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41"} err="failed to get container status \"16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41\": rpc error: code = NotFound desc = could not find container \"16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41\": container with ID starting with 16746694a7a50002a5326b1622c124fcd860c56f689d0e52102a99b2eb1afb41 not found: ID does not exist" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.184910 4904 scope.go:117] "RemoveContainer" containerID="a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137" Nov 21 15:57:34 crc kubenswrapper[4904]: E1121 15:57:34.185385 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137\": container with ID starting with a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137 not found: ID does not exist" containerID="a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.185490 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137"} err="failed to get container status \"a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137\": rpc error: code = NotFound desc = could not find container \"a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137\": container with ID starting with a2682dfe29a77acf852f840acef3813177ff8fd4ab67b028cd7337264dd9c137 not found: ID does not exist" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.185574 4904 scope.go:117] "RemoveContainer" containerID="8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa" Nov 21 15:57:34 crc kubenswrapper[4904]: E1121 15:57:34.185868 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa\": container with ID starting with 8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa not found: ID does not exist" containerID="8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.185895 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa"} err="failed to get container status \"8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa\": rpc error: code = NotFound desc = could not find container \"8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa\": container with ID starting with 8fec861f2e80a799b8b0ecbcba585fdafb69345537e22e94137d825b308ed8aa not found: ID does not exist" Nov 21 15:57:34 crc kubenswrapper[4904]: I1121 15:57:34.530595 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" path="/var/lib/kubelet/pods/74161691-91dc-4a3e-bc23-c433c8d5ac52/volumes" Nov 21 15:57:44 crc kubenswrapper[4904]: I1121 15:57:44.164093 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5fc6c85b79-9lzch_d9a95797-5a30-4849-8c30-13f3ff99a4c2/kube-rbac-proxy/0.log" Nov 21 15:57:44 crc kubenswrapper[4904]: I1121 15:57:44.181354 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5fc6c85b79-9lzch_d9a95797-5a30-4849-8c30-13f3ff99a4c2/manager/0.log" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.633225 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8lm"] Nov 21 15:58:24 crc kubenswrapper[4904]: E1121 15:58:24.634191 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="extract-utilities" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.634205 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="extract-utilities" Nov 21 15:58:24 crc kubenswrapper[4904]: E1121 15:58:24.634234 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.634240 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" Nov 21 15:58:24 crc kubenswrapper[4904]: E1121 15:58:24.634265 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="extract-content" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.634272 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="extract-content" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.634514 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="74161691-91dc-4a3e-bc23-c433c8d5ac52" containerName="registry-server" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.636459 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.647447 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8lm"] Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.790904 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-catalog-content\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.791551 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqlv6\" (UniqueName: \"kubernetes.io/projected/72e81e5a-86db-430f-a714-5719ad3c06e3-kube-api-access-tqlv6\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.791760 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-utilities\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.894425 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-utilities\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.894535 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-catalog-content\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.894578 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqlv6\" (UniqueName: \"kubernetes.io/projected/72e81e5a-86db-430f-a714-5719ad3c06e3-kube-api-access-tqlv6\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.895033 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-utilities\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.895148 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-catalog-content\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.919575 4904 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tqlv6\" (UniqueName: \"kubernetes.io/projected/72e81e5a-86db-430f-a714-5719ad3c06e3-kube-api-access-tqlv6\") pod \"redhat-marketplace-jd8lm\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:24 crc kubenswrapper[4904]: I1121 15:58:24.966758 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:25 crc kubenswrapper[4904]: I1121 15:58:25.854089 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8lm"] Nov 21 15:58:25 crc kubenswrapper[4904]: W1121 15:58:25.857619 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72e81e5a_86db_430f_a714_5719ad3c06e3.slice/crio-ae959666f44c75077179827191956ff7468f83af267d5cc18a2a3850409ab9bd WatchSource:0}: Error finding container ae959666f44c75077179827191956ff7468f83af267d5cc18a2a3850409ab9bd: Status 404 returned error can't find the container with id ae959666f44c75077179827191956ff7468f83af267d5cc18a2a3850409ab9bd Nov 21 15:58:26 crc kubenswrapper[4904]: I1121 15:58:26.743149 4904 generic.go:334] "Generic (PLEG): container finished" podID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerID="9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85" exitCode=0 Nov 21 15:58:26 crc kubenswrapper[4904]: I1121 15:58:26.743282 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8lm" event={"ID":"72e81e5a-86db-430f-a714-5719ad3c06e3","Type":"ContainerDied","Data":"9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85"} Nov 21 15:58:26 crc kubenswrapper[4904]: I1121 15:58:26.743459 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8lm" event={"ID":"72e81e5a-86db-430f-a714-5719ad3c06e3","Type":"ContainerStarted","Data":"ae959666f44c75077179827191956ff7468f83af267d5cc18a2a3850409ab9bd"} Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.014813 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mq8tw"] Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.018045 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.024924 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mq8tw"] Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.143168 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-utilities\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.143350 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-catalog-content\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.143403 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlspm\" (UniqueName: \"kubernetes.io/projected/7b39e74d-0833-424f-962a-12579c6597d7-kube-api-access-rlspm\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.249465 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-utilities\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.250238 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-catalog-content\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.250524 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlspm\" (UniqueName: \"kubernetes.io/projected/7b39e74d-0833-424f-962a-12579c6597d7-kube-api-access-rlspm\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.250113 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-utilities\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.250917 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-catalog-content\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.271392 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rlspm\" (UniqueName: \"kubernetes.io/projected/7b39e74d-0833-424f-962a-12579c6597d7-kube-api-access-rlspm\") pod \"certified-operators-mq8tw\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.346811 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.757408 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8lm" event={"ID":"72e81e5a-86db-430f-a714-5719ad3c06e3","Type":"ContainerStarted","Data":"3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f"} Nov 21 15:58:27 crc kubenswrapper[4904]: I1121 15:58:27.928215 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mq8tw"] Nov 21 15:58:28 crc kubenswrapper[4904]: I1121 15:58:28.769768 4904 generic.go:334] "Generic (PLEG): container finished" podID="7b39e74d-0833-424f-962a-12579c6597d7" containerID="02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1" exitCode=0 Nov 21 15:58:28 crc kubenswrapper[4904]: I1121 15:58:28.770701 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq8tw" event={"ID":"7b39e74d-0833-424f-962a-12579c6597d7","Type":"ContainerDied","Data":"02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1"} Nov 21 15:58:28 crc kubenswrapper[4904]: I1121 15:58:28.771036 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq8tw" event={"ID":"7b39e74d-0833-424f-962a-12579c6597d7","Type":"ContainerStarted","Data":"07f0ac9323a79c16839de522ff40c32fa1f846c6c37c95617509f622a71062ae"} Nov 21 15:58:28 crc kubenswrapper[4904]: I1121 15:58:28.774626 4904 generic.go:334] "Generic (PLEG): container finished" podID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerID="3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f" exitCode=0 Nov 21 15:58:28 crc kubenswrapper[4904]: I1121 15:58:28.774831 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8lm" event={"ID":"72e81e5a-86db-430f-a714-5719ad3c06e3","Type":"ContainerDied","Data":"3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f"} Nov 21 15:58:29 crc kubenswrapper[4904]: I1121 15:58:29.807275 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8lm" event={"ID":"72e81e5a-86db-430f-a714-5719ad3c06e3","Type":"ContainerStarted","Data":"4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30"} Nov 21 15:58:29 crc kubenswrapper[4904]: I1121 15:58:29.810806 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq8tw" event={"ID":"7b39e74d-0833-424f-962a-12579c6597d7","Type":"ContainerStarted","Data":"2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5"} Nov 21 15:58:29 crc kubenswrapper[4904]: I1121 15:58:29.831736 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jd8lm" podStartSLOduration=3.395434981 podStartE2EDuration="5.831710628s" podCreationTimestamp="2025-11-21 15:58:24 +0000 UTC" firstStartedPulling="2025-11-21 15:58:26.745160431 +0000 UTC m=+8780.866692983" lastFinishedPulling="2025-11-21 
15:58:29.181436068 +0000 UTC m=+8783.302968630" observedRunningTime="2025-11-21 15:58:29.830036957 +0000 UTC m=+8783.951569519" watchObservedRunningTime="2025-11-21 15:58:29.831710628 +0000 UTC m=+8783.953243200" Nov 21 15:58:31 crc kubenswrapper[4904]: I1121 15:58:31.841865 4904 generic.go:334] "Generic (PLEG): container finished" podID="7b39e74d-0833-424f-962a-12579c6597d7" containerID="2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5" exitCode=0 Nov 21 15:58:31 crc kubenswrapper[4904]: I1121 15:58:31.844341 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq8tw" event={"ID":"7b39e74d-0833-424f-962a-12579c6597d7","Type":"ContainerDied","Data":"2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5"} Nov 21 15:58:32 crc kubenswrapper[4904]: I1121 15:58:32.870591 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq8tw" event={"ID":"7b39e74d-0833-424f-962a-12579c6597d7","Type":"ContainerStarted","Data":"235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3"} Nov 21 15:58:32 crc kubenswrapper[4904]: I1121 15:58:32.899857 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mq8tw" podStartSLOduration=3.148630172 podStartE2EDuration="6.899809746s" podCreationTimestamp="2025-11-21 15:58:26 +0000 UTC" firstStartedPulling="2025-11-21 15:58:28.773911263 +0000 UTC m=+8782.895443815" lastFinishedPulling="2025-11-21 15:58:32.525090837 +0000 UTC m=+8786.646623389" observedRunningTime="2025-11-21 15:58:32.891949575 +0000 UTC m=+8787.013482157" watchObservedRunningTime="2025-11-21 15:58:32.899809746 +0000 UTC m=+8787.021342298" Nov 21 15:58:34 crc kubenswrapper[4904]: I1121 15:58:34.966875 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:34 crc kubenswrapper[4904]: I1121 15:58:34.968311 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:35 crc kubenswrapper[4904]: I1121 15:58:35.019768 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:35 crc kubenswrapper[4904]: I1121 15:58:35.950715 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:36 crc kubenswrapper[4904]: I1121 15:58:36.608392 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8lm"] Nov 21 15:58:37 crc kubenswrapper[4904]: I1121 15:58:37.347087 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:37 crc kubenswrapper[4904]: I1121 15:58:37.348534 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:37 crc kubenswrapper[4904]: I1121 15:58:37.406062 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:37 crc kubenswrapper[4904]: I1121 15:58:37.922381 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jd8lm" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerName="registry-server" 
containerID="cri-o://4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30" gracePeriod=2 Nov 21 15:58:37 crc kubenswrapper[4904]: I1121 15:58:37.971728 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.473879 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.651643 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-catalog-content\") pod \"72e81e5a-86db-430f-a714-5719ad3c06e3\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.651814 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-utilities\") pod \"72e81e5a-86db-430f-a714-5719ad3c06e3\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.652033 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqlv6\" (UniqueName: \"kubernetes.io/projected/72e81e5a-86db-430f-a714-5719ad3c06e3-kube-api-access-tqlv6\") pod \"72e81e5a-86db-430f-a714-5719ad3c06e3\" (UID: \"72e81e5a-86db-430f-a714-5719ad3c06e3\") " Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.655199 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-utilities" (OuterVolumeSpecName: "utilities") pod "72e81e5a-86db-430f-a714-5719ad3c06e3" (UID: "72e81e5a-86db-430f-a714-5719ad3c06e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.663259 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e81e5a-86db-430f-a714-5719ad3c06e3-kube-api-access-tqlv6" (OuterVolumeSpecName: "kube-api-access-tqlv6") pod "72e81e5a-86db-430f-a714-5719ad3c06e3" (UID: "72e81e5a-86db-430f-a714-5719ad3c06e3"). InnerVolumeSpecName "kube-api-access-tqlv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.755627 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.755741 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqlv6\" (UniqueName: \"kubernetes.io/projected/72e81e5a-86db-430f-a714-5719ad3c06e3-kube-api-access-tqlv6\") on node \"crc\" DevicePath \"\"" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.859536 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72e81e5a-86db-430f-a714-5719ad3c06e3" (UID: "72e81e5a-86db-430f-a714-5719ad3c06e3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.941932 4904 generic.go:334] "Generic (PLEG): container finished" podID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerID="4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30" exitCode=0 Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.941966 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8lm" event={"ID":"72e81e5a-86db-430f-a714-5719ad3c06e3","Type":"ContainerDied","Data":"4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30"} Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.942030 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd8lm" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.943337 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8lm" event={"ID":"72e81e5a-86db-430f-a714-5719ad3c06e3","Type":"ContainerDied","Data":"ae959666f44c75077179827191956ff7468f83af267d5cc18a2a3850409ab9bd"} Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.943385 4904 scope.go:117] "RemoveContainer" containerID="4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.961079 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72e81e5a-86db-430f-a714-5719ad3c06e3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.969360 4904 scope.go:117] "RemoveContainer" containerID="3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f" Nov 21 15:58:38 crc kubenswrapper[4904]: I1121 15:58:38.994189 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8lm"] Nov 21 15:58:39 crc kubenswrapper[4904]: I1121 15:58:39.007791 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8lm"] Nov 21 15:58:39 crc kubenswrapper[4904]: I1121 15:58:39.017529 4904 scope.go:117] "RemoveContainer" containerID="9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85" Nov 21 15:58:39 crc kubenswrapper[4904]: I1121 15:58:39.085614 4904 scope.go:117] "RemoveContainer" containerID="4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30" Nov 21 15:58:39 crc kubenswrapper[4904]: E1121 15:58:39.086051 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30\": container with ID starting with 4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30 not found: ID does not exist" containerID="4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30" Nov 21 15:58:39 crc kubenswrapper[4904]: I1121 15:58:39.086085 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30"} err="failed to get container status \"4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30\": rpc error: code = NotFound desc = could not find container \"4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30\": container with ID starting with 4b5fc7306b4b8ae1b99b2cf2d9dd173ae78755a198243a8645c037eb4b4cff30 not found: ID does not exist" Nov 21 15:58:39 
crc kubenswrapper[4904]: I1121 15:58:39.086106 4904 scope.go:117] "RemoveContainer" containerID="3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f" Nov 21 15:58:39 crc kubenswrapper[4904]: E1121 15:58:39.086460 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f\": container with ID starting with 3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f not found: ID does not exist" containerID="3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f" Nov 21 15:58:39 crc kubenswrapper[4904]: I1121 15:58:39.086508 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f"} err="failed to get container status \"3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f\": rpc error: code = NotFound desc = could not find container \"3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f\": container with ID starting with 3e0fc4618d4e7fd881b6bfcf3d1b43de23fc9d2820b35c90e73a8405df3df18f not found: ID does not exist" Nov 21 15:58:39 crc kubenswrapper[4904]: I1121 15:58:39.086523 4904 scope.go:117] "RemoveContainer" containerID="9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85" Nov 21 15:58:39 crc kubenswrapper[4904]: E1121 15:58:39.086826 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85\": container with ID starting with 9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85 not found: ID does not exist" containerID="9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85" Nov 21 15:58:39 crc kubenswrapper[4904]: I1121 15:58:39.086851 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85"} err="failed to get container status \"9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85\": rpc error: code = NotFound desc = could not find container \"9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85\": container with ID starting with 9023630b1e1e4cb8b7f46c24214c5432a4dff251fe080d33806d42772a62bd85 not found: ID does not exist" Nov 21 15:58:39 crc kubenswrapper[4904]: I1121 15:58:39.412081 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mq8tw"] Nov 21 15:58:40 crc kubenswrapper[4904]: I1121 15:58:40.527707 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" path="/var/lib/kubelet/pods/72e81e5a-86db-430f-a714-5719ad3c06e3/volumes" Nov 21 15:58:40 crc kubenswrapper[4904]: I1121 15:58:40.964915 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mq8tw" podUID="7b39e74d-0833-424f-962a-12579c6597d7" containerName="registry-server" containerID="cri-o://235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3" gracePeriod=2 Nov 21 15:58:41 crc kubenswrapper[4904]: I1121 15:58:41.968132 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:41 crc kubenswrapper[4904]: I1121 15:58:41.977556 4904 generic.go:334] "Generic (PLEG): container finished" podID="7b39e74d-0833-424f-962a-12579c6597d7" containerID="235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3" exitCode=0 Nov 21 15:58:41 crc kubenswrapper[4904]: I1121 15:58:41.977584 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq8tw" event={"ID":"7b39e74d-0833-424f-962a-12579c6597d7","Type":"ContainerDied","Data":"235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3"} Nov 21 15:58:41 crc kubenswrapper[4904]: I1121 15:58:41.977632 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mq8tw" event={"ID":"7b39e74d-0833-424f-962a-12579c6597d7","Type":"ContainerDied","Data":"07f0ac9323a79c16839de522ff40c32fa1f846c6c37c95617509f622a71062ae"} Nov 21 15:58:41 crc kubenswrapper[4904]: I1121 15:58:41.977639 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mq8tw" Nov 21 15:58:41 crc kubenswrapper[4904]: I1121 15:58:41.977666 4904 scope.go:117] "RemoveContainer" containerID="235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.017545 4904 scope.go:117] "RemoveContainer" containerID="2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.046941 4904 scope.go:117] "RemoveContainer" containerID="02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.111844 4904 scope.go:117] "RemoveContainer" containerID="235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3" Nov 21 15:58:42 crc kubenswrapper[4904]: E1121 15:58:42.112364 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3\": container with ID starting with 235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3 not found: ID does not exist" containerID="235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.112404 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3"} err="failed to get container status \"235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3\": rpc error: code = NotFound desc = could not find container \"235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3\": container with ID starting with 235e54ad1a067da4226bf9d9bf7a372731a3c813f32f3f692185379b8975c6f3 not found: ID does not exist" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.112432 4904 scope.go:117] "RemoveContainer" containerID="2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5" Nov 21 15:58:42 crc kubenswrapper[4904]: E1121 15:58:42.112968 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5\": container with ID starting with 2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5 not found: ID does not exist" 
containerID="2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.113010 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5"} err="failed to get container status \"2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5\": rpc error: code = NotFound desc = could not find container \"2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5\": container with ID starting with 2e5c2dbded35f732090b09d78144c3a1a04d51a878a60011c3bc026e19e063e5 not found: ID does not exist" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.113038 4904 scope.go:117] "RemoveContainer" containerID="02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1" Nov 21 15:58:42 crc kubenswrapper[4904]: E1121 15:58:42.113466 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1\": container with ID starting with 02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1 not found: ID does not exist" containerID="02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.113493 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1"} err="failed to get container status \"02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1\": rpc error: code = NotFound desc = could not find container \"02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1\": container with ID starting with 02c64a5c10f85d11a61b717d4535cdaebb0d5c8ac3ffc9422676336cf502c7f1 not found: ID does not exist" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.129013 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-utilities\") pod \"7b39e74d-0833-424f-962a-12579c6597d7\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.129325 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlspm\" (UniqueName: \"kubernetes.io/projected/7b39e74d-0833-424f-962a-12579c6597d7-kube-api-access-rlspm\") pod \"7b39e74d-0833-424f-962a-12579c6597d7\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.129418 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-catalog-content\") pod \"7b39e74d-0833-424f-962a-12579c6597d7\" (UID: \"7b39e74d-0833-424f-962a-12579c6597d7\") " Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.130076 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-utilities" (OuterVolumeSpecName: "utilities") pod "7b39e74d-0833-424f-962a-12579c6597d7" (UID: "7b39e74d-0833-424f-962a-12579c6597d7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.135056 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b39e74d-0833-424f-962a-12579c6597d7-kube-api-access-rlspm" (OuterVolumeSpecName: "kube-api-access-rlspm") pod "7b39e74d-0833-424f-962a-12579c6597d7" (UID: "7b39e74d-0833-424f-962a-12579c6597d7"). InnerVolumeSpecName "kube-api-access-rlspm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.180169 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b39e74d-0833-424f-962a-12579c6597d7" (UID: "7b39e74d-0833-424f-962a-12579c6597d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.232548 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.232585 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlspm\" (UniqueName: \"kubernetes.io/projected/7b39e74d-0833-424f-962a-12579c6597d7-kube-api-access-rlspm\") on node \"crc\" DevicePath \"\"" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.232617 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b39e74d-0833-424f-962a-12579c6597d7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.321389 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mq8tw"] Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.332432 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mq8tw"] Nov 21 15:58:42 crc kubenswrapper[4904]: I1121 15:58:42.526496 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b39e74d-0833-424f-962a-12579c6597d7" path="/var/lib/kubelet/pods/7b39e74d-0833-424f-962a-12579c6597d7/volumes" Nov 21 15:59:28 crc kubenswrapper[4904]: I1121 15:59:28.114020 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:59:28 crc kubenswrapper[4904]: I1121 15:59:28.115727 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 15:59:36 crc kubenswrapper[4904]: I1121 15:59:36.158234 4904 scope.go:117] "RemoveContainer" containerID="bdb19f28dc0ee70148a40059b14b339795c38e6841f2a2c546830c8ec45b7754" Nov 21 15:59:58 crc kubenswrapper[4904]: I1121 15:59:58.114042 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 15:59:58 crc kubenswrapper[4904]: I1121 15:59:58.114810 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.329409 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj"] Nov 21 16:00:00 crc kubenswrapper[4904]: E1121 16:00:00.332130 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerName="extract-utilities" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.332273 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerName="extract-utilities" Nov 21 16:00:00 crc kubenswrapper[4904]: E1121 16:00:00.332370 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b39e74d-0833-424f-962a-12579c6597d7" containerName="registry-server" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.332443 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b39e74d-0833-424f-962a-12579c6597d7" containerName="registry-server" Nov 21 16:00:00 crc kubenswrapper[4904]: E1121 16:00:00.332525 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b39e74d-0833-424f-962a-12579c6597d7" containerName="extract-content" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.332587 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b39e74d-0833-424f-962a-12579c6597d7" containerName="extract-content" Nov 21 16:00:00 crc kubenswrapper[4904]: E1121 16:00:00.332679 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerName="registry-server" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.332748 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerName="registry-server" Nov 21 16:00:00 crc kubenswrapper[4904]: E1121 16:00:00.332822 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b39e74d-0833-424f-962a-12579c6597d7" containerName="extract-utilities" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.332895 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b39e74d-0833-424f-962a-12579c6597d7" containerName="extract-utilities" Nov 21 16:00:00 crc kubenswrapper[4904]: E1121 16:00:00.332970 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerName="extract-content" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.333042 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerName="extract-content" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.333355 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e81e5a-86db-430f-a714-5719ad3c06e3" containerName="registry-server" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.333457 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b39e74d-0833-424f-962a-12579c6597d7" containerName="registry-server" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.334868 4904 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.342734 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj"] Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.370566 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.370575 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.450679 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vmn7\" (UniqueName: \"kubernetes.io/projected/8a10172a-fd93-4e35-a566-5ee50596d743-kube-api-access-4vmn7\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.451187 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a10172a-fd93-4e35-a566-5ee50596d743-config-volume\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.451293 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a10172a-fd93-4e35-a566-5ee50596d743-secret-volume\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.553675 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vmn7\" (UniqueName: \"kubernetes.io/projected/8a10172a-fd93-4e35-a566-5ee50596d743-kube-api-access-4vmn7\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.553914 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a10172a-fd93-4e35-a566-5ee50596d743-config-volume\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.553935 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a10172a-fd93-4e35-a566-5ee50596d743-secret-volume\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.556329 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/8a10172a-fd93-4e35-a566-5ee50596d743-config-volume\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.560452 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a10172a-fd93-4e35-a566-5ee50596d743-secret-volume\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.576636 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vmn7\" (UniqueName: \"kubernetes.io/projected/8a10172a-fd93-4e35-a566-5ee50596d743-kube-api-access-4vmn7\") pod \"collect-profiles-29395680-z4fxj\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:00 crc kubenswrapper[4904]: I1121 16:00:00.673276 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:01 crc kubenswrapper[4904]: I1121 16:00:01.166559 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj"] Nov 21 16:00:01 crc kubenswrapper[4904]: I1121 16:00:01.822477 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" event={"ID":"8a10172a-fd93-4e35-a566-5ee50596d743","Type":"ContainerStarted","Data":"9f13629d14f2eb8dcfc5d231857fa0d9b8df9766d185a7d7623f365a874bbed2"} Nov 21 16:00:01 crc kubenswrapper[4904]: I1121 16:00:01.822848 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" event={"ID":"8a10172a-fd93-4e35-a566-5ee50596d743","Type":"ContainerStarted","Data":"7901a86d886833177d15603da683dfca7d847c925bf3f564f1145d312e69d259"} Nov 21 16:00:01 crc kubenswrapper[4904]: I1121 16:00:01.827447 4904 generic.go:334] "Generic (PLEG): container finished" podID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerID="94c0ab1d57f0df5f845758c1a38c2e9a0d9a1860be79ad32e11477c6231ec992" exitCode=0 Nov 21 16:00:01 crc kubenswrapper[4904]: I1121 16:00:01.827495 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" event={"ID":"eb0e4139-6747-439e-92ec-f496c5a5de62","Type":"ContainerDied","Data":"94c0ab1d57f0df5f845758c1a38c2e9a0d9a1860be79ad32e11477c6231ec992"} Nov 21 16:00:01 crc kubenswrapper[4904]: I1121 16:00:01.827975 4904 scope.go:117] "RemoveContainer" containerID="94c0ab1d57f0df5f845758c1a38c2e9a0d9a1860be79ad32e11477c6231ec992" Nov 21 16:00:01 crc kubenswrapper[4904]: I1121 16:00:01.845189 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" podStartSLOduration=1.8451694029999999 podStartE2EDuration="1.845169403s" podCreationTimestamp="2025-11-21 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 16:00:01.838727046 +0000 UTC m=+8875.960259608" watchObservedRunningTime="2025-11-21 16:00:01.845169403 +0000 UTC m=+8875.966701955" Nov 21 
16:00:02 crc kubenswrapper[4904]: I1121 16:00:02.827593 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bnnlm_must-gather-dlgbb_eb0e4139-6747-439e-92ec-f496c5a5de62/gather/0.log" Nov 21 16:00:03 crc kubenswrapper[4904]: I1121 16:00:03.858743 4904 generic.go:334] "Generic (PLEG): container finished" podID="8a10172a-fd93-4e35-a566-5ee50596d743" containerID="9f13629d14f2eb8dcfc5d231857fa0d9b8df9766d185a7d7623f365a874bbed2" exitCode=0 Nov 21 16:00:03 crc kubenswrapper[4904]: I1121 16:00:03.858829 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" event={"ID":"8a10172a-fd93-4e35-a566-5ee50596d743","Type":"ContainerDied","Data":"9f13629d14f2eb8dcfc5d231857fa0d9b8df9766d185a7d7623f365a874bbed2"} Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.268047 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.394850 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vmn7\" (UniqueName: \"kubernetes.io/projected/8a10172a-fd93-4e35-a566-5ee50596d743-kube-api-access-4vmn7\") pod \"8a10172a-fd93-4e35-a566-5ee50596d743\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.395522 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a10172a-fd93-4e35-a566-5ee50596d743-secret-volume\") pod \"8a10172a-fd93-4e35-a566-5ee50596d743\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.395936 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a10172a-fd93-4e35-a566-5ee50596d743-config-volume\") pod \"8a10172a-fd93-4e35-a566-5ee50596d743\" (UID: \"8a10172a-fd93-4e35-a566-5ee50596d743\") " Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.396790 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a10172a-fd93-4e35-a566-5ee50596d743-config-volume" (OuterVolumeSpecName: "config-volume") pod "8a10172a-fd93-4e35-a566-5ee50596d743" (UID: "8a10172a-fd93-4e35-a566-5ee50596d743"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.402238 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a10172a-fd93-4e35-a566-5ee50596d743-kube-api-access-4vmn7" (OuterVolumeSpecName: "kube-api-access-4vmn7") pod "8a10172a-fd93-4e35-a566-5ee50596d743" (UID: "8a10172a-fd93-4e35-a566-5ee50596d743"). InnerVolumeSpecName "kube-api-access-4vmn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.404620 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a10172a-fd93-4e35-a566-5ee50596d743-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8a10172a-fd93-4e35-a566-5ee50596d743" (UID: "8a10172a-fd93-4e35-a566-5ee50596d743"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.499095 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a10172a-fd93-4e35-a566-5ee50596d743-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.499433 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a10172a-fd93-4e35-a566-5ee50596d743-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.499530 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vmn7\" (UniqueName: \"kubernetes.io/projected/8a10172a-fd93-4e35-a566-5ee50596d743-kube-api-access-4vmn7\") on node \"crc\" DevicePath \"\"" Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.890136 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" event={"ID":"8a10172a-fd93-4e35-a566-5ee50596d743","Type":"ContainerDied","Data":"7901a86d886833177d15603da683dfca7d847c925bf3f564f1145d312e69d259"} Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.890187 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7901a86d886833177d15603da683dfca7d847c925bf3f564f1145d312e69d259" Nov 21 16:00:05 crc kubenswrapper[4904]: I1121 16:00:05.890253 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395680-z4fxj" Nov 21 16:00:06 crc kubenswrapper[4904]: I1121 16:00:06.041241 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz"] Nov 21 16:00:06 crc kubenswrapper[4904]: I1121 16:00:06.051319 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395635-whrcz"] Nov 21 16:00:06 crc kubenswrapper[4904]: I1121 16:00:06.547435 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d7c9133-5ded-4a51-bc40-7d0f166fe17b" path="/var/lib/kubelet/pods/9d7c9133-5ded-4a51-bc40-7d0f166fe17b/volumes" Nov 21 16:00:11 crc kubenswrapper[4904]: I1121 16:00:11.824463 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bnnlm/must-gather-dlgbb"] Nov 21 16:00:11 crc kubenswrapper[4904]: I1121 16:00:11.825084 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" podUID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerName="copy" containerID="cri-o://0e9487674d5247d9126365b14b143a0b82a3719214accfd2775b6d067eb08e30" gracePeriod=2 Nov 21 16:00:11 crc kubenswrapper[4904]: I1121 16:00:11.834472 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bnnlm/must-gather-dlgbb"] Nov 21 16:00:11 crc kubenswrapper[4904]: I1121 16:00:11.963054 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bnnlm_must-gather-dlgbb_eb0e4139-6747-439e-92ec-f496c5a5de62/copy/0.log" Nov 21 16:00:11 crc kubenswrapper[4904]: I1121 16:00:11.963876 4904 generic.go:334] "Generic (PLEG): container finished" podID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerID="0e9487674d5247d9126365b14b143a0b82a3719214accfd2775b6d067eb08e30" exitCode=143 Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.345700 4904 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-must-gather-bnnlm_must-gather-dlgbb_eb0e4139-6747-439e-92ec-f496c5a5de62/copy/0.log" Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.346832 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.465147 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb0e4139-6747-439e-92ec-f496c5a5de62-must-gather-output\") pod \"eb0e4139-6747-439e-92ec-f496c5a5de62\" (UID: \"eb0e4139-6747-439e-92ec-f496c5a5de62\") " Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.465700 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lcpq\" (UniqueName: \"kubernetes.io/projected/eb0e4139-6747-439e-92ec-f496c5a5de62-kube-api-access-2lcpq\") pod \"eb0e4139-6747-439e-92ec-f496c5a5de62\" (UID: \"eb0e4139-6747-439e-92ec-f496c5a5de62\") " Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.473777 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0e4139-6747-439e-92ec-f496c5a5de62-kube-api-access-2lcpq" (OuterVolumeSpecName: "kube-api-access-2lcpq") pod "eb0e4139-6747-439e-92ec-f496c5a5de62" (UID: "eb0e4139-6747-439e-92ec-f496c5a5de62"). InnerVolumeSpecName "kube-api-access-2lcpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.568602 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lcpq\" (UniqueName: \"kubernetes.io/projected/eb0e4139-6747-439e-92ec-f496c5a5de62-kube-api-access-2lcpq\") on node \"crc\" DevicePath \"\"" Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.669094 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb0e4139-6747-439e-92ec-f496c5a5de62-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "eb0e4139-6747-439e-92ec-f496c5a5de62" (UID: "eb0e4139-6747-439e-92ec-f496c5a5de62"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.671846 4904 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/eb0e4139-6747-439e-92ec-f496c5a5de62-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.977067 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bnnlm_must-gather-dlgbb_eb0e4139-6747-439e-92ec-f496c5a5de62/copy/0.log" Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.977624 4904 scope.go:117] "RemoveContainer" containerID="0e9487674d5247d9126365b14b143a0b82a3719214accfd2775b6d067eb08e30" Nov 21 16:00:12 crc kubenswrapper[4904]: I1121 16:00:12.977747 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bnnlm/must-gather-dlgbb" Nov 21 16:00:13 crc kubenswrapper[4904]: I1121 16:00:13.002194 4904 scope.go:117] "RemoveContainer" containerID="94c0ab1d57f0df5f845758c1a38c2e9a0d9a1860be79ad32e11477c6231ec992" Nov 21 16:00:14 crc kubenswrapper[4904]: I1121 16:00:14.528729 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0e4139-6747-439e-92ec-f496c5a5de62" path="/var/lib/kubelet/pods/eb0e4139-6747-439e-92ec-f496c5a5de62/volumes" Nov 21 16:00:28 crc kubenswrapper[4904]: I1121 16:00:28.114197 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 16:00:28 crc kubenswrapper[4904]: I1121 16:00:28.114663 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 16:00:28 crc kubenswrapper[4904]: I1121 16:00:28.114699 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 16:00:28 crc kubenswrapper[4904]: I1121 16:00:28.115260 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea4c4cca1717a8906d46b06506e812168b3bae84196c63b90056b1bc268c0f36"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 16:00:28 crc kubenswrapper[4904]: I1121 16:00:28.115304 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://ea4c4cca1717a8906d46b06506e812168b3bae84196c63b90056b1bc268c0f36" gracePeriod=600 Nov 21 16:00:29 crc kubenswrapper[4904]: I1121 16:00:29.300587 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="ea4c4cca1717a8906d46b06506e812168b3bae84196c63b90056b1bc268c0f36" exitCode=0 Nov 21 16:00:29 crc kubenswrapper[4904]: I1121 16:00:29.300719 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"ea4c4cca1717a8906d46b06506e812168b3bae84196c63b90056b1bc268c0f36"} Nov 21 16:00:29 crc kubenswrapper[4904]: I1121 16:00:29.301794 4904 scope.go:117] "RemoveContainer" containerID="e653253bd239f2925264e7566c94fd0a7046874de904693959b4ae6f6ef9c760" Nov 21 16:00:30 crc kubenswrapper[4904]: I1121 16:00:30.316240 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14"} Nov 21 16:00:36 crc kubenswrapper[4904]: I1121 16:00:36.273942 4904 scope.go:117] "RemoveContainer" 
containerID="9dc73a4c83b63eb01dbefd1789a8e4d7f3cbabd5029beae8a1d57c37f68e10fc" Nov 21 16:00:36 crc kubenswrapper[4904]: I1121 16:00:36.298047 4904 scope.go:117] "RemoveContainer" containerID="29796af511a1b64317cac2c3e929b432c1b01f4040bdceb4b503ee289b6a5d2e" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.148766 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29395681-6bhx2"] Nov 21 16:01:00 crc kubenswrapper[4904]: E1121 16:01:00.149761 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a10172a-fd93-4e35-a566-5ee50596d743" containerName="collect-profiles" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.149776 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a10172a-fd93-4e35-a566-5ee50596d743" containerName="collect-profiles" Nov 21 16:01:00 crc kubenswrapper[4904]: E1121 16:01:00.149792 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerName="copy" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.149799 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerName="copy" Nov 21 16:01:00 crc kubenswrapper[4904]: E1121 16:01:00.149824 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerName="gather" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.149832 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerName="gather" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.150033 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a10172a-fd93-4e35-a566-5ee50596d743" containerName="collect-profiles" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.150054 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerName="copy" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.150068 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0e4139-6747-439e-92ec-f496c5a5de62" containerName="gather" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.150905 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.165926 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395681-6bhx2"] Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.259426 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-config-data\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.259804 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-combined-ca-bundle\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.259868 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhkkf\" (UniqueName: \"kubernetes.io/projected/f97c22a6-b89d-41ef-b9ab-39839347dbb5-kube-api-access-zhkkf\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.260202 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-fernet-keys\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.362779 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-config-data\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.362834 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-combined-ca-bundle\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.362858 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhkkf\" (UniqueName: \"kubernetes.io/projected/f97c22a6-b89d-41ef-b9ab-39839347dbb5-kube-api-access-zhkkf\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.362983 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-fernet-keys\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.369744 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-combined-ca-bundle\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.369761 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-config-data\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.380289 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-fernet-keys\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.381094 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhkkf\" (UniqueName: \"kubernetes.io/projected/f97c22a6-b89d-41ef-b9ab-39839347dbb5-kube-api-access-zhkkf\") pod \"keystone-cron-29395681-6bhx2\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.471482 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:00 crc kubenswrapper[4904]: I1121 16:01:00.920814 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29395681-6bhx2"] Nov 21 16:01:01 crc kubenswrapper[4904]: I1121 16:01:01.659491 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395681-6bhx2" event={"ID":"f97c22a6-b89d-41ef-b9ab-39839347dbb5","Type":"ContainerStarted","Data":"a84d4cd3e52abb1efbfc53ba33003da92d4dcd15f97be95f0e000c9a073abb63"} Nov 21 16:01:01 crc kubenswrapper[4904]: I1121 16:01:01.659872 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395681-6bhx2" event={"ID":"f97c22a6-b89d-41ef-b9ab-39839347dbb5","Type":"ContainerStarted","Data":"a99110f80fd676b56fe79c1816f9fb0c9a35d96143ff86f9a515ea0d3e0084c4"} Nov 21 16:01:01 crc kubenswrapper[4904]: I1121 16:01:01.679531 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29395681-6bhx2" podStartSLOduration=1.679506719 podStartE2EDuration="1.679506719s" podCreationTimestamp="2025-11-21 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 16:01:01.675237735 +0000 UTC m=+8935.796770277" watchObservedRunningTime="2025-11-21 16:01:01.679506719 +0000 UTC m=+8935.801039271" Nov 21 16:01:04 crc kubenswrapper[4904]: I1121 16:01:04.687417 4904 generic.go:334] "Generic (PLEG): container finished" podID="f97c22a6-b89d-41ef-b9ab-39839347dbb5" containerID="a84d4cd3e52abb1efbfc53ba33003da92d4dcd15f97be95f0e000c9a073abb63" exitCode=0 Nov 21 16:01:04 crc kubenswrapper[4904]: I1121 16:01:04.687499 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395681-6bhx2" event={"ID":"f97c22a6-b89d-41ef-b9ab-39839347dbb5","Type":"ContainerDied","Data":"a84d4cd3e52abb1efbfc53ba33003da92d4dcd15f97be95f0e000c9a073abb63"} Nov 21 16:01:06 crc kubenswrapper[4904]: 
I1121 16:01:06.061948 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.087132 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhkkf\" (UniqueName: \"kubernetes.io/projected/f97c22a6-b89d-41ef-b9ab-39839347dbb5-kube-api-access-zhkkf\") pod \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.087274 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-fernet-keys\") pod \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.087364 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-combined-ca-bundle\") pod \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.087412 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-config-data\") pod \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\" (UID: \"f97c22a6-b89d-41ef-b9ab-39839347dbb5\") " Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.094889 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97c22a6-b89d-41ef-b9ab-39839347dbb5-kube-api-access-zhkkf" (OuterVolumeSpecName: "kube-api-access-zhkkf") pod "f97c22a6-b89d-41ef-b9ab-39839347dbb5" (UID: "f97c22a6-b89d-41ef-b9ab-39839347dbb5"). InnerVolumeSpecName "kube-api-access-zhkkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.094990 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f97c22a6-b89d-41ef-b9ab-39839347dbb5" (UID: "f97c22a6-b89d-41ef-b9ab-39839347dbb5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.125289 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f97c22a6-b89d-41ef-b9ab-39839347dbb5" (UID: "f97c22a6-b89d-41ef-b9ab-39839347dbb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.152961 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-config-data" (OuterVolumeSpecName: "config-data") pod "f97c22a6-b89d-41ef-b9ab-39839347dbb5" (UID: "f97c22a6-b89d-41ef-b9ab-39839347dbb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.189540 4904 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-config-data\") on node \"crc\" DevicePath \"\"" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.189576 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhkkf\" (UniqueName: \"kubernetes.io/projected/f97c22a6-b89d-41ef-b9ab-39839347dbb5-kube-api-access-zhkkf\") on node \"crc\" DevicePath \"\"" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.189588 4904 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.189596 4904 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97c22a6-b89d-41ef-b9ab-39839347dbb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.707854 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29395681-6bhx2" event={"ID":"f97c22a6-b89d-41ef-b9ab-39839347dbb5","Type":"ContainerDied","Data":"a99110f80fd676b56fe79c1816f9fb0c9a35d96143ff86f9a515ea0d3e0084c4"} Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.707897 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a99110f80fd676b56fe79c1816f9fb0c9a35d96143ff86f9a515ea0d3e0084c4" Nov 21 16:01:06 crc kubenswrapper[4904]: I1121 16:01:06.707931 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29395681-6bhx2" Nov 21 16:01:30 crc kubenswrapper[4904]: I1121 16:01:30.816995 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qf5jn"] Nov 21 16:01:30 crc kubenswrapper[4904]: E1121 16:01:30.818262 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97c22a6-b89d-41ef-b9ab-39839347dbb5" containerName="keystone-cron" Nov 21 16:01:30 crc kubenswrapper[4904]: I1121 16:01:30.818280 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97c22a6-b89d-41ef-b9ab-39839347dbb5" containerName="keystone-cron" Nov 21 16:01:30 crc kubenswrapper[4904]: I1121 16:01:30.818509 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97c22a6-b89d-41ef-b9ab-39839347dbb5" containerName="keystone-cron" Nov 21 16:01:30 crc kubenswrapper[4904]: I1121 16:01:30.820458 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:30 crc kubenswrapper[4904]: I1121 16:01:30.828444 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qf5jn"] Nov 21 16:01:30 crc kubenswrapper[4904]: I1121 16:01:30.924760 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld248\" (UniqueName: \"kubernetes.io/projected/91aa8418-271e-4f2e-a565-359f934d9f74-kube-api-access-ld248\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:30 crc kubenswrapper[4904]: I1121 16:01:30.925181 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-utilities\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:30 crc kubenswrapper[4904]: I1121 16:01:30.925222 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-catalog-content\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.027568 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld248\" (UniqueName: \"kubernetes.io/projected/91aa8418-271e-4f2e-a565-359f934d9f74-kube-api-access-ld248\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.027703 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-utilities\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.027729 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-catalog-content\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.028166 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-utilities\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.028603 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-catalog-content\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.058170 4904 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ld248\" (UniqueName: \"kubernetes.io/projected/91aa8418-271e-4f2e-a565-359f934d9f74-kube-api-access-ld248\") pod \"community-operators-qf5jn\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.145925 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.633925 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qf5jn"] Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.984091 4904 generic.go:334] "Generic (PLEG): container finished" podID="91aa8418-271e-4f2e-a565-359f934d9f74" containerID="364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256" exitCode=0 Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.984153 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5jn" event={"ID":"91aa8418-271e-4f2e-a565-359f934d9f74","Type":"ContainerDied","Data":"364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256"} Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.984437 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5jn" event={"ID":"91aa8418-271e-4f2e-a565-359f934d9f74","Type":"ContainerStarted","Data":"f498370b2bdebbfbe49116b0cb2841c8478fa1d82f340e0dba396fe542913810"} Nov 21 16:01:31 crc kubenswrapper[4904]: I1121 16:01:31.989921 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 16:01:33 crc kubenswrapper[4904]: I1121 16:01:33.004416 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5jn" event={"ID":"91aa8418-271e-4f2e-a565-359f934d9f74","Type":"ContainerStarted","Data":"dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4"} Nov 21 16:01:35 crc kubenswrapper[4904]: I1121 16:01:35.028539 4904 generic.go:334] "Generic (PLEG): container finished" podID="91aa8418-271e-4f2e-a565-359f934d9f74" containerID="dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4" exitCode=0 Nov 21 16:01:35 crc kubenswrapper[4904]: I1121 16:01:35.029175 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5jn" event={"ID":"91aa8418-271e-4f2e-a565-359f934d9f74","Type":"ContainerDied","Data":"dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4"} Nov 21 16:01:36 crc kubenswrapper[4904]: I1121 16:01:36.042795 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5jn" event={"ID":"91aa8418-271e-4f2e-a565-359f934d9f74","Type":"ContainerStarted","Data":"4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257"} Nov 21 16:01:36 crc kubenswrapper[4904]: I1121 16:01:36.072781 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qf5jn" podStartSLOduration=2.635060525 podStartE2EDuration="6.072756357s" podCreationTimestamp="2025-11-21 16:01:30 +0000 UTC" firstStartedPulling="2025-11-21 16:01:31.989466299 +0000 UTC m=+8966.110998851" lastFinishedPulling="2025-11-21 16:01:35.427162131 +0000 UTC m=+8969.548694683" observedRunningTime="2025-11-21 16:01:36.060497201 +0000 UTC m=+8970.182029753" watchObservedRunningTime="2025-11-21 
16:01:36.072756357 +0000 UTC m=+8970.194288909" Nov 21 16:01:41 crc kubenswrapper[4904]: I1121 16:01:41.146854 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:41 crc kubenswrapper[4904]: I1121 16:01:41.148146 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:41 crc kubenswrapper[4904]: I1121 16:01:41.200017 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:42 crc kubenswrapper[4904]: I1121 16:01:42.173040 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:42 crc kubenswrapper[4904]: I1121 16:01:42.243052 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qf5jn"] Nov 21 16:01:44 crc kubenswrapper[4904]: I1121 16:01:44.139497 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qf5jn" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" containerName="registry-server" containerID="cri-o://4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257" gracePeriod=2 Nov 21 16:01:44 crc kubenswrapper[4904]: I1121 16:01:44.923185 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.072240 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-catalog-content\") pod \"91aa8418-271e-4f2e-a565-359f934d9f74\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.072328 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld248\" (UniqueName: \"kubernetes.io/projected/91aa8418-271e-4f2e-a565-359f934d9f74-kube-api-access-ld248\") pod \"91aa8418-271e-4f2e-a565-359f934d9f74\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.072364 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-utilities\") pod \"91aa8418-271e-4f2e-a565-359f934d9f74\" (UID: \"91aa8418-271e-4f2e-a565-359f934d9f74\") " Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.073411 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-utilities" (OuterVolumeSpecName: "utilities") pod "91aa8418-271e-4f2e-a565-359f934d9f74" (UID: "91aa8418-271e-4f2e-a565-359f934d9f74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.080006 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91aa8418-271e-4f2e-a565-359f934d9f74-kube-api-access-ld248" (OuterVolumeSpecName: "kube-api-access-ld248") pod "91aa8418-271e-4f2e-a565-359f934d9f74" (UID: "91aa8418-271e-4f2e-a565-359f934d9f74"). InnerVolumeSpecName "kube-api-access-ld248". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.132423 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91aa8418-271e-4f2e-a565-359f934d9f74" (UID: "91aa8418-271e-4f2e-a565-359f934d9f74"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.154366 4904 generic.go:334] "Generic (PLEG): container finished" podID="91aa8418-271e-4f2e-a565-359f934d9f74" containerID="4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257" exitCode=0 Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.154445 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5jn" event={"ID":"91aa8418-271e-4f2e-a565-359f934d9f74","Type":"ContainerDied","Data":"4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257"} Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.154480 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5jn" event={"ID":"91aa8418-271e-4f2e-a565-359f934d9f74","Type":"ContainerDied","Data":"f498370b2bdebbfbe49116b0cb2841c8478fa1d82f340e0dba396fe542913810"} Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.154495 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qf5jn" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.154503 4904 scope.go:117] "RemoveContainer" containerID="4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.176347 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.176404 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld248\" (UniqueName: \"kubernetes.io/projected/91aa8418-271e-4f2e-a565-359f934d9f74-kube-api-access-ld248\") on node \"crc\" DevicePath \"\"" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.176420 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91aa8418-271e-4f2e-a565-359f934d9f74-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.178609 4904 scope.go:117] "RemoveContainer" containerID="dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.198381 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qf5jn"] Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.211331 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qf5jn"] Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.228856 4904 scope.go:117] "RemoveContainer" containerID="364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.286444 4904 scope.go:117] "RemoveContainer" containerID="4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257" Nov 21 16:01:45 crc kubenswrapper[4904]: E1121 16:01:45.286987 4904 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257\": container with ID starting with 4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257 not found: ID does not exist" containerID="4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.287056 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257"} err="failed to get container status \"4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257\": rpc error: code = NotFound desc = could not find container \"4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257\": container with ID starting with 4353ffdcce7ec8990ac7a46af084f18c34bd47de2403ccc773706e9854418257 not found: ID does not exist" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.287084 4904 scope.go:117] "RemoveContainer" containerID="dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4" Nov 21 16:01:45 crc kubenswrapper[4904]: E1121 16:01:45.287528 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4\": container with ID starting with dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4 not found: ID does not exist" containerID="dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.288079 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4"} err="failed to get container status \"dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4\": rpc error: code = NotFound desc = could not find container \"dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4\": container with ID starting with dd78b2ee9131a74357c0f2f64593cd840b72242a5b6f5e4936e73f7a6d1723a4 not found: ID does not exist" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.288129 4904 scope.go:117] "RemoveContainer" containerID="364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256" Nov 21 16:01:45 crc kubenswrapper[4904]: E1121 16:01:45.289630 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256\": container with ID starting with 364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256 not found: ID does not exist" containerID="364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256" Nov 21 16:01:45 crc kubenswrapper[4904]: I1121 16:01:45.289704 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256"} err="failed to get container status \"364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256\": rpc error: code = NotFound desc = could not find container \"364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256\": container with ID starting with 364f040618a9b94d6c3bcefef91bc887038b9b73005e3252b4d0a3a0ac1e6256 not found: ID does not exist" Nov 21 16:01:46 crc kubenswrapper[4904]: I1121 16:01:46.525305 4904 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" path="/var/lib/kubelet/pods/91aa8418-271e-4f2e-a565-359f934d9f74/volumes" Nov 21 16:02:58 crc kubenswrapper[4904]: I1121 16:02:58.113750 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 16:02:58 crc kubenswrapper[4904]: I1121 16:02:58.114274 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.152901 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qq87d/must-gather-4b5lz"] Nov 21 16:03:25 crc kubenswrapper[4904]: E1121 16:03:25.153884 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" containerName="registry-server" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.153896 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" containerName="registry-server" Nov 21 16:03:25 crc kubenswrapper[4904]: E1121 16:03:25.153940 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" containerName="extract-utilities" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.153946 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" containerName="extract-utilities" Nov 21 16:03:25 crc kubenswrapper[4904]: E1121 16:03:25.153962 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" containerName="extract-content" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.153968 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" containerName="extract-content" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.154162 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="91aa8418-271e-4f2e-a565-359f934d9f74" containerName="registry-server" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.199346 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq87d/must-gather-4b5lz" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.201896 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qq87d"/"openshift-service-ca.crt" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.201949 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qq87d"/"kube-root-ca.crt" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.281538 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b29dc509-2e94-4042-950a-d37c24cef5a0-must-gather-output\") pod \"must-gather-4b5lz\" (UID: \"b29dc509-2e94-4042-950a-d37c24cef5a0\") " pod="openshift-must-gather-qq87d/must-gather-4b5lz" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.282195 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmcjr\" (UniqueName: \"kubernetes.io/projected/b29dc509-2e94-4042-950a-d37c24cef5a0-kube-api-access-qmcjr\") pod \"must-gather-4b5lz\" (UID: \"b29dc509-2e94-4042-950a-d37c24cef5a0\") " pod="openshift-must-gather-qq87d/must-gather-4b5lz" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.363924 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qq87d/must-gather-4b5lz"] Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.384704 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmcjr\" (UniqueName: \"kubernetes.io/projected/b29dc509-2e94-4042-950a-d37c24cef5a0-kube-api-access-qmcjr\") pod \"must-gather-4b5lz\" (UID: \"b29dc509-2e94-4042-950a-d37c24cef5a0\") " pod="openshift-must-gather-qq87d/must-gather-4b5lz" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.384795 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b29dc509-2e94-4042-950a-d37c24cef5a0-must-gather-output\") pod \"must-gather-4b5lz\" (UID: \"b29dc509-2e94-4042-950a-d37c24cef5a0\") " pod="openshift-must-gather-qq87d/must-gather-4b5lz" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.385395 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b29dc509-2e94-4042-950a-d37c24cef5a0-must-gather-output\") pod \"must-gather-4b5lz\" (UID: \"b29dc509-2e94-4042-950a-d37c24cef5a0\") " pod="openshift-must-gather-qq87d/must-gather-4b5lz" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.423813 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmcjr\" (UniqueName: \"kubernetes.io/projected/b29dc509-2e94-4042-950a-d37c24cef5a0-kube-api-access-qmcjr\") pod \"must-gather-4b5lz\" (UID: \"b29dc509-2e94-4042-950a-d37c24cef5a0\") " pod="openshift-must-gather-qq87d/must-gather-4b5lz" Nov 21 16:03:25 crc kubenswrapper[4904]: I1121 16:03:25.519839 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq87d/must-gather-4b5lz" Nov 21 16:03:26 crc kubenswrapper[4904]: I1121 16:03:26.074412 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qq87d/must-gather-4b5lz"] Nov 21 16:03:26 crc kubenswrapper[4904]: I1121 16:03:26.153493 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/must-gather-4b5lz" event={"ID":"b29dc509-2e94-4042-950a-d37c24cef5a0","Type":"ContainerStarted","Data":"3a010276aa6d30dc4aa0d0b1d8bc41e0e2cd09beac320cf86de85170a1dec137"} Nov 21 16:03:28 crc kubenswrapper[4904]: I1121 16:03:28.113765 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 16:03:28 crc kubenswrapper[4904]: I1121 16:03:28.114357 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 16:03:28 crc kubenswrapper[4904]: I1121 16:03:28.176218 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/must-gather-4b5lz" event={"ID":"b29dc509-2e94-4042-950a-d37c24cef5a0","Type":"ContainerStarted","Data":"16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8"} Nov 21 16:03:28 crc kubenswrapper[4904]: I1121 16:03:28.176305 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/must-gather-4b5lz" event={"ID":"b29dc509-2e94-4042-950a-d37c24cef5a0","Type":"ContainerStarted","Data":"3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc"} Nov 21 16:03:28 crc kubenswrapper[4904]: I1121 16:03:28.200676 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qq87d/must-gather-4b5lz" podStartSLOduration=3.20063472 podStartE2EDuration="3.20063472s" podCreationTimestamp="2025-11-21 16:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 16:03:28.190084393 +0000 UTC m=+9082.311616945" watchObservedRunningTime="2025-11-21 16:03:28.20063472 +0000 UTC m=+9082.322167272" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.077922 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qq87d/crc-debug-wncjc"] Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.081258 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.083987 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qq87d"/"default-dockercfg-6p2rf" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.205549 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be00288a-dc3e-4538-ab86-9119593b2995-host\") pod \"crc-debug-wncjc\" (UID: \"be00288a-dc3e-4538-ab86-9119593b2995\") " pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.205703 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24tm7\" (UniqueName: \"kubernetes.io/projected/be00288a-dc3e-4538-ab86-9119593b2995-kube-api-access-24tm7\") pod \"crc-debug-wncjc\" (UID: \"be00288a-dc3e-4538-ab86-9119593b2995\") " pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.308559 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be00288a-dc3e-4538-ab86-9119593b2995-host\") pod \"crc-debug-wncjc\" (UID: \"be00288a-dc3e-4538-ab86-9119593b2995\") " pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.308702 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24tm7\" (UniqueName: \"kubernetes.io/projected/be00288a-dc3e-4538-ab86-9119593b2995-kube-api-access-24tm7\") pod \"crc-debug-wncjc\" (UID: \"be00288a-dc3e-4538-ab86-9119593b2995\") " pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.321255 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be00288a-dc3e-4538-ab86-9119593b2995-host\") pod \"crc-debug-wncjc\" (UID: \"be00288a-dc3e-4538-ab86-9119593b2995\") " pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.330198 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24tm7\" (UniqueName: \"kubernetes.io/projected/be00288a-dc3e-4538-ab86-9119593b2995-kube-api-access-24tm7\") pod \"crc-debug-wncjc\" (UID: \"be00288a-dc3e-4538-ab86-9119593b2995\") " pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:03:34 crc kubenswrapper[4904]: I1121 16:03:34.507165 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:03:34 crc kubenswrapper[4904]: W1121 16:03:34.746619 4904 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe00288a_dc3e_4538_ab86_9119593b2995.slice/crio-533d9a5ce340e497b8490f728119e9f7b9b8207727acaae107088c05827f8d1b WatchSource:0}: Error finding container 533d9a5ce340e497b8490f728119e9f7b9b8207727acaae107088c05827f8d1b: Status 404 returned error can't find the container with id 533d9a5ce340e497b8490f728119e9f7b9b8207727acaae107088c05827f8d1b Nov 21 16:03:35 crc kubenswrapper[4904]: I1121 16:03:35.250075 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/crc-debug-wncjc" event={"ID":"be00288a-dc3e-4538-ab86-9119593b2995","Type":"ContainerStarted","Data":"533d9a5ce340e497b8490f728119e9f7b9b8207727acaae107088c05827f8d1b"} Nov 21 16:03:37 crc kubenswrapper[4904]: I1121 16:03:37.270320 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/crc-debug-wncjc" event={"ID":"be00288a-dc3e-4538-ab86-9119593b2995","Type":"ContainerStarted","Data":"e2f18d02d26a56f000512e37f496ee408eb7c9c0027e3e0c5aee39b0691876c4"} Nov 21 16:03:37 crc kubenswrapper[4904]: I1121 16:03:37.287884 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qq87d/crc-debug-wncjc" podStartSLOduration=3.28786351 podStartE2EDuration="3.28786351s" podCreationTimestamp="2025-11-21 16:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 16:03:37.286380404 +0000 UTC m=+9091.407912956" watchObservedRunningTime="2025-11-21 16:03:37.28786351 +0000 UTC m=+9091.409396062" Nov 21 16:03:58 crc kubenswrapper[4904]: I1121 16:03:58.114145 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 21 16:03:58 crc kubenswrapper[4904]: I1121 16:03:58.114722 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 21 16:03:58 crc kubenswrapper[4904]: I1121 16:03:58.114773 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" Nov 21 16:03:58 crc kubenswrapper[4904]: I1121 16:03:58.115735 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 21 16:03:58 crc kubenswrapper[4904]: I1121 16:03:58.115790 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" 
containerID="cri-o://fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" gracePeriod=600 Nov 21 16:03:59 crc kubenswrapper[4904]: I1121 16:03:59.541016 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" exitCode=0 Nov 21 16:03:59 crc kubenswrapper[4904]: I1121 16:03:59.541108 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14"} Nov 21 16:03:59 crc kubenswrapper[4904]: I1121 16:03:59.541675 4904 scope.go:117] "RemoveContainer" containerID="ea4c4cca1717a8906d46b06506e812168b3bae84196c63b90056b1bc268c0f36" Nov 21 16:04:00 crc kubenswrapper[4904]: E1121 16:04:00.141988 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:04:00 crc kubenswrapper[4904]: I1121 16:04:00.567683 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:04:00 crc kubenswrapper[4904]: E1121 16:04:00.568707 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:04:12 crc kubenswrapper[4904]: I1121 16:04:12.513041 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:04:12 crc kubenswrapper[4904]: E1121 16:04:12.515800 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:04:26 crc kubenswrapper[4904]: I1121 16:04:26.522748 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:04:26 crc kubenswrapper[4904]: E1121 16:04:26.523668 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:04:38 crc kubenswrapper[4904]: I1121 16:04:38.513533 4904 scope.go:117] "RemoveContainer" 
containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:04:38 crc kubenswrapper[4904]: E1121 16:04:38.514295 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:04:40 crc kubenswrapper[4904]: I1121 16:04:40.042023 4904 generic.go:334] "Generic (PLEG): container finished" podID="be00288a-dc3e-4538-ab86-9119593b2995" containerID="e2f18d02d26a56f000512e37f496ee408eb7c9c0027e3e0c5aee39b0691876c4" exitCode=0 Nov 21 16:04:40 crc kubenswrapper[4904]: I1121 16:04:40.042097 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/crc-debug-wncjc" event={"ID":"be00288a-dc3e-4538-ab86-9119593b2995","Type":"ContainerDied","Data":"e2f18d02d26a56f000512e37f496ee408eb7c9c0027e3e0c5aee39b0691876c4"} Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.166503 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.201447 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qq87d/crc-debug-wncjc"] Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.211009 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qq87d/crc-debug-wncjc"] Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.252238 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24tm7\" (UniqueName: \"kubernetes.io/projected/be00288a-dc3e-4538-ab86-9119593b2995-kube-api-access-24tm7\") pod \"be00288a-dc3e-4538-ab86-9119593b2995\" (UID: \"be00288a-dc3e-4538-ab86-9119593b2995\") " Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.252422 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be00288a-dc3e-4538-ab86-9119593b2995-host\") pod \"be00288a-dc3e-4538-ab86-9119593b2995\" (UID: \"be00288a-dc3e-4538-ab86-9119593b2995\") " Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.252595 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be00288a-dc3e-4538-ab86-9119593b2995-host" (OuterVolumeSpecName: "host") pod "be00288a-dc3e-4538-ab86-9119593b2995" (UID: "be00288a-dc3e-4538-ab86-9119593b2995"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.253323 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be00288a-dc3e-4538-ab86-9119593b2995-host\") on node \"crc\" DevicePath \"\"" Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.259630 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be00288a-dc3e-4538-ab86-9119593b2995-kube-api-access-24tm7" (OuterVolumeSpecName: "kube-api-access-24tm7") pod "be00288a-dc3e-4538-ab86-9119593b2995" (UID: "be00288a-dc3e-4538-ab86-9119593b2995"). InnerVolumeSpecName "kube-api-access-24tm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:04:41 crc kubenswrapper[4904]: I1121 16:04:41.354917 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24tm7\" (UniqueName: \"kubernetes.io/projected/be00288a-dc3e-4538-ab86-9119593b2995-kube-api-access-24tm7\") on node \"crc\" DevicePath \"\"" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.063746 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="533d9a5ce340e497b8490f728119e9f7b9b8207727acaae107088c05827f8d1b" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.063805 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-wncjc" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.367118 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qq87d/crc-debug-j8sjm"] Nov 21 16:04:42 crc kubenswrapper[4904]: E1121 16:04:42.367549 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be00288a-dc3e-4538-ab86-9119593b2995" containerName="container-00" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.367561 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="be00288a-dc3e-4538-ab86-9119593b2995" containerName="container-00" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.367811 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="be00288a-dc3e-4538-ab86-9119593b2995" containerName="container-00" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.368542 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.370434 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qq87d"/"default-dockercfg-6p2rf" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.481831 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj9vt\" (UniqueName: \"kubernetes.io/projected/71ff446e-2275-4b62-8e4e-978a472b2195-kube-api-access-lj9vt\") pod \"crc-debug-j8sjm\" (UID: \"71ff446e-2275-4b62-8e4e-978a472b2195\") " pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.482023 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71ff446e-2275-4b62-8e4e-978a472b2195-host\") pod \"crc-debug-j8sjm\" (UID: \"71ff446e-2275-4b62-8e4e-978a472b2195\") " pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.539301 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be00288a-dc3e-4538-ab86-9119593b2995" path="/var/lib/kubelet/pods/be00288a-dc3e-4538-ab86-9119593b2995/volumes" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.585772 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj9vt\" (UniqueName: \"kubernetes.io/projected/71ff446e-2275-4b62-8e4e-978a472b2195-kube-api-access-lj9vt\") pod \"crc-debug-j8sjm\" (UID: \"71ff446e-2275-4b62-8e4e-978a472b2195\") " pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.585866 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/71ff446e-2275-4b62-8e4e-978a472b2195-host\") pod \"crc-debug-j8sjm\" (UID: \"71ff446e-2275-4b62-8e4e-978a472b2195\") " pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.585955 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71ff446e-2275-4b62-8e4e-978a472b2195-host\") pod \"crc-debug-j8sjm\" (UID: \"71ff446e-2275-4b62-8e4e-978a472b2195\") " pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.615897 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj9vt\" (UniqueName: \"kubernetes.io/projected/71ff446e-2275-4b62-8e4e-978a472b2195-kube-api-access-lj9vt\") pod \"crc-debug-j8sjm\" (UID: \"71ff446e-2275-4b62-8e4e-978a472b2195\") " pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:42 crc kubenswrapper[4904]: I1121 16:04:42.688685 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:43 crc kubenswrapper[4904]: I1121 16:04:43.076144 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/crc-debug-j8sjm" event={"ID":"71ff446e-2275-4b62-8e4e-978a472b2195","Type":"ContainerStarted","Data":"a4f86ded64ae4a1dea8b560dd0f2b854035f76a4e93407f63f8ebd84989c9fa8"} Nov 21 16:04:43 crc kubenswrapper[4904]: I1121 16:04:43.076507 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/crc-debug-j8sjm" event={"ID":"71ff446e-2275-4b62-8e4e-978a472b2195","Type":"ContainerStarted","Data":"5d8f259730d8a18345d3871a78ee3fa16c3ef21ce828e560b22c30661a6687b0"} Nov 21 16:04:43 crc kubenswrapper[4904]: I1121 16:04:43.096370 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qq87d/crc-debug-j8sjm" podStartSLOduration=1.09634262 podStartE2EDuration="1.09634262s" podCreationTimestamp="2025-11-21 16:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 16:04:43.091478223 +0000 UTC m=+9157.213010815" watchObservedRunningTime="2025-11-21 16:04:43.09634262 +0000 UTC m=+9157.217875182" Nov 21 16:04:44 crc kubenswrapper[4904]: I1121 16:04:44.089280 4904 generic.go:334] "Generic (PLEG): container finished" podID="71ff446e-2275-4b62-8e4e-978a472b2195" containerID="a4f86ded64ae4a1dea8b560dd0f2b854035f76a4e93407f63f8ebd84989c9fa8" exitCode=0 Nov 21 16:04:44 crc kubenswrapper[4904]: I1121 16:04:44.089365 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/crc-debug-j8sjm" event={"ID":"71ff446e-2275-4b62-8e4e-978a472b2195","Type":"ContainerDied","Data":"a4f86ded64ae4a1dea8b560dd0f2b854035f76a4e93407f63f8ebd84989c9fa8"} Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.229641 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.348889 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71ff446e-2275-4b62-8e4e-978a472b2195-host\") pod \"71ff446e-2275-4b62-8e4e-978a472b2195\" (UID: \"71ff446e-2275-4b62-8e4e-978a472b2195\") " Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.348956 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71ff446e-2275-4b62-8e4e-978a472b2195-host" (OuterVolumeSpecName: "host") pod "71ff446e-2275-4b62-8e4e-978a472b2195" (UID: "71ff446e-2275-4b62-8e4e-978a472b2195"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.349593 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj9vt\" (UniqueName: \"kubernetes.io/projected/71ff446e-2275-4b62-8e4e-978a472b2195-kube-api-access-lj9vt\") pod \"71ff446e-2275-4b62-8e4e-978a472b2195\" (UID: \"71ff446e-2275-4b62-8e4e-978a472b2195\") " Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.350460 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71ff446e-2275-4b62-8e4e-978a472b2195-host\") on node \"crc\" DevicePath \"\"" Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.364278 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ff446e-2275-4b62-8e4e-978a472b2195-kube-api-access-lj9vt" (OuterVolumeSpecName: "kube-api-access-lj9vt") pod "71ff446e-2275-4b62-8e4e-978a472b2195" (UID: "71ff446e-2275-4b62-8e4e-978a472b2195"). InnerVolumeSpecName "kube-api-access-lj9vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.452160 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj9vt\" (UniqueName: \"kubernetes.io/projected/71ff446e-2275-4b62-8e4e-978a472b2195-kube-api-access-lj9vt\") on node \"crc\" DevicePath \"\"" Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.741661 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qq87d/crc-debug-j8sjm"] Nov 21 16:04:45 crc kubenswrapper[4904]: I1121 16:04:45.754139 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qq87d/crc-debug-j8sjm"] Nov 21 16:04:46 crc kubenswrapper[4904]: I1121 16:04:46.111742 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d8f259730d8a18345d3871a78ee3fa16c3ef21ce828e560b22c30661a6687b0" Nov 21 16:04:46 crc kubenswrapper[4904]: I1121 16:04:46.111799 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-j8sjm" Nov 21 16:04:46 crc kubenswrapper[4904]: I1121 16:04:46.528357 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ff446e-2275-4b62-8e4e-978a472b2195" path="/var/lib/kubelet/pods/71ff446e-2275-4b62-8e4e-978a472b2195/volumes" Nov 21 16:04:46 crc kubenswrapper[4904]: I1121 16:04:46.995779 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qq87d/crc-debug-v4d75"] Nov 21 16:04:46 crc kubenswrapper[4904]: E1121 16:04:46.996325 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ff446e-2275-4b62-8e4e-978a472b2195" containerName="container-00" Nov 21 16:04:46 crc kubenswrapper[4904]: I1121 16:04:46.996348 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ff446e-2275-4b62-8e4e-978a472b2195" containerName="container-00" Nov 21 16:04:46 crc kubenswrapper[4904]: I1121 16:04:46.996760 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ff446e-2275-4b62-8e4e-978a472b2195" containerName="container-00" Nov 21 16:04:46 crc kubenswrapper[4904]: I1121 16:04:46.997828 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:47 crc kubenswrapper[4904]: I1121 16:04:47.001456 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qq87d"/"default-dockercfg-6p2rf" Nov 21 16:04:47 crc kubenswrapper[4904]: I1121 16:04:47.091264 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22a90787-44be-4479-ac86-311e4d418d31-host\") pod \"crc-debug-v4d75\" (UID: \"22a90787-44be-4479-ac86-311e4d418d31\") " pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:47 crc kubenswrapper[4904]: I1121 16:04:47.091606 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brhl7\" (UniqueName: \"kubernetes.io/projected/22a90787-44be-4479-ac86-311e4d418d31-kube-api-access-brhl7\") pod \"crc-debug-v4d75\" (UID: \"22a90787-44be-4479-ac86-311e4d418d31\") " pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:47 crc kubenswrapper[4904]: I1121 16:04:47.194180 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22a90787-44be-4479-ac86-311e4d418d31-host\") pod \"crc-debug-v4d75\" (UID: \"22a90787-44be-4479-ac86-311e4d418d31\") " pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:47 crc kubenswrapper[4904]: I1121 16:04:47.194300 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22a90787-44be-4479-ac86-311e4d418d31-host\") pod \"crc-debug-v4d75\" (UID: \"22a90787-44be-4479-ac86-311e4d418d31\") " pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:47 crc kubenswrapper[4904]: I1121 16:04:47.194570 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brhl7\" (UniqueName: \"kubernetes.io/projected/22a90787-44be-4479-ac86-311e4d418d31-kube-api-access-brhl7\") pod \"crc-debug-v4d75\" (UID: \"22a90787-44be-4479-ac86-311e4d418d31\") " pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:47 crc kubenswrapper[4904]: I1121 16:04:47.220394 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brhl7\" (UniqueName: 
\"kubernetes.io/projected/22a90787-44be-4479-ac86-311e4d418d31-kube-api-access-brhl7\") pod \"crc-debug-v4d75\" (UID: \"22a90787-44be-4479-ac86-311e4d418d31\") " pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:47 crc kubenswrapper[4904]: I1121 16:04:47.320955 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:48 crc kubenswrapper[4904]: I1121 16:04:48.131255 4904 generic.go:334] "Generic (PLEG): container finished" podID="22a90787-44be-4479-ac86-311e4d418d31" containerID="82bd29c87c942d64018181653337ce398d4dcc96e3abb54b08bdd3264b90a486" exitCode=0 Nov 21 16:04:48 crc kubenswrapper[4904]: I1121 16:04:48.131342 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/crc-debug-v4d75" event={"ID":"22a90787-44be-4479-ac86-311e4d418d31","Type":"ContainerDied","Data":"82bd29c87c942d64018181653337ce398d4dcc96e3abb54b08bdd3264b90a486"} Nov 21 16:04:48 crc kubenswrapper[4904]: I1121 16:04:48.131566 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/crc-debug-v4d75" event={"ID":"22a90787-44be-4479-ac86-311e4d418d31","Type":"ContainerStarted","Data":"3169f68b2f8cbb13a41e0986a9de04dceb2739c9576bad86d0d88f6c60422355"} Nov 21 16:04:48 crc kubenswrapper[4904]: I1121 16:04:48.178987 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qq87d/crc-debug-v4d75"] Nov 21 16:04:48 crc kubenswrapper[4904]: I1121 16:04:48.189112 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qq87d/crc-debug-v4d75"] Nov 21 16:04:49 crc kubenswrapper[4904]: I1121 16:04:49.268643 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:49 crc kubenswrapper[4904]: I1121 16:04:49.444940 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22a90787-44be-4479-ac86-311e4d418d31-host\") pod \"22a90787-44be-4479-ac86-311e4d418d31\" (UID: \"22a90787-44be-4479-ac86-311e4d418d31\") " Nov 21 16:04:49 crc kubenswrapper[4904]: I1121 16:04:49.445087 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22a90787-44be-4479-ac86-311e4d418d31-host" (OuterVolumeSpecName: "host") pod "22a90787-44be-4479-ac86-311e4d418d31" (UID: "22a90787-44be-4479-ac86-311e4d418d31"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 21 16:04:49 crc kubenswrapper[4904]: I1121 16:04:49.445275 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brhl7\" (UniqueName: \"kubernetes.io/projected/22a90787-44be-4479-ac86-311e4d418d31-kube-api-access-brhl7\") pod \"22a90787-44be-4479-ac86-311e4d418d31\" (UID: \"22a90787-44be-4479-ac86-311e4d418d31\") " Nov 21 16:04:49 crc kubenswrapper[4904]: I1121 16:04:49.445989 4904 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/22a90787-44be-4479-ac86-311e4d418d31-host\") on node \"crc\" DevicePath \"\"" Nov 21 16:04:49 crc kubenswrapper[4904]: I1121 16:04:49.450836 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22a90787-44be-4479-ac86-311e4d418d31-kube-api-access-brhl7" (OuterVolumeSpecName: "kube-api-access-brhl7") pod "22a90787-44be-4479-ac86-311e4d418d31" (UID: "22a90787-44be-4479-ac86-311e4d418d31"). 
InnerVolumeSpecName "kube-api-access-brhl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:04:49 crc kubenswrapper[4904]: I1121 16:04:49.548454 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brhl7\" (UniqueName: \"kubernetes.io/projected/22a90787-44be-4479-ac86-311e4d418d31-kube-api-access-brhl7\") on node \"crc\" DevicePath \"\"" Nov 21 16:04:50 crc kubenswrapper[4904]: I1121 16:04:50.152529 4904 scope.go:117] "RemoveContainer" containerID="82bd29c87c942d64018181653337ce398d4dcc96e3abb54b08bdd3264b90a486" Nov 21 16:04:50 crc kubenswrapper[4904]: I1121 16:04:50.152821 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/crc-debug-v4d75" Nov 21 16:04:50 crc kubenswrapper[4904]: I1121 16:04:50.527389 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22a90787-44be-4479-ac86-311e4d418d31" path="/var/lib/kubelet/pods/22a90787-44be-4479-ac86-311e4d418d31/volumes" Nov 21 16:04:53 crc kubenswrapper[4904]: I1121 16:04:53.513841 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:04:53 crc kubenswrapper[4904]: E1121 16:04:53.514767 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:05:06 crc kubenswrapper[4904]: I1121 16:05:06.521187 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:05:06 crc kubenswrapper[4904]: E1121 16:05:06.521973 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:05:20 crc kubenswrapper[4904]: I1121 16:05:20.514025 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:05:20 crc kubenswrapper[4904]: E1121 16:05:20.514985 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:05:34 crc kubenswrapper[4904]: I1121 16:05:34.514094 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:05:34 crc kubenswrapper[4904]: E1121 16:05:34.515010 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.144769 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_53e60150-0305-4106-8864-769576a7016c/aodh-api/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.271414 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_53e60150-0305-4106-8864-769576a7016c/aodh-evaluator/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.358361 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_53e60150-0305-4106-8864-769576a7016c/aodh-listener/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.381954 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_53e60150-0305-4106-8864-769576a7016c/aodh-notifier/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.523301 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:05:46 crc kubenswrapper[4904]: E1121 16:05:46.523578 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.526281 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f4785b448-tkwjh_5533e799-de81-4937-a2ca-8876e2bf3c22/barbican-api/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.586926 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f4785b448-tkwjh_5533e799-de81-4937-a2ca-8876e2bf3c22/barbican-api-log/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.673842 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75d77468c8-8lrt8_8b360c72-5d34-4e63-b653-3f3e80539384/barbican-keystone-listener/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.886938 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75d77468c8-8lrt8_8b360c72-5d34-4e63-b653-3f3e80539384/barbican-keystone-listener-log/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.896332 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f6485bcff-l7ccc_9a6718b1-07c1-4270-a030-cc36bef71bbc/barbican-worker/0.log" Nov 21 16:05:46 crc kubenswrapper[4904]: I1121 16:05:46.981300 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7f6485bcff-l7ccc_9a6718b1-07c1-4270-a030-cc36bef71bbc/barbican-worker-log/0.log" Nov 21 16:05:47 crc kubenswrapper[4904]: I1121 16:05:47.222786 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-6jq8q_0a28f51f-d088-4c6b-aec9-9fcde8dd9b94/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:47 crc kubenswrapper[4904]: I1121 16:05:47.362630 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_7cd84528-51ed-4df2-81cd-d0793668a01a/ceilometer-central-agent/0.log" Nov 21 16:05:47 crc kubenswrapper[4904]: I1121 16:05:47.402611 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7cd84528-51ed-4df2-81cd-d0793668a01a/ceilometer-notification-agent/0.log" Nov 21 16:05:47 crc kubenswrapper[4904]: I1121 16:05:47.500177 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7cd84528-51ed-4df2-81cd-d0793668a01a/proxy-httpd/0.log" Nov 21 16:05:47 crc kubenswrapper[4904]: I1121 16:05:47.543330 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7cd84528-51ed-4df2-81cd-d0793668a01a/sg-core/0.log" Nov 21 16:05:47 crc kubenswrapper[4904]: I1121 16:05:47.666253 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-5mpdb_e446baad-37a3-4206-a40a-67ac35889d21/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:47 crc kubenswrapper[4904]: I1121 16:05:47.775376 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-kdggn_9eb481be-e58b-4b7a-be65-188c8c4c6d70/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:48 crc kubenswrapper[4904]: I1121 16:05:48.049289 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fd487d38-5efb-41f0-88f9-ba5360b8c3cf/cinder-api/0.log" Nov 21 16:05:48 crc kubenswrapper[4904]: I1121 16:05:48.072621 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_fd487d38-5efb-41f0-88f9-ba5360b8c3cf/cinder-api-log/0.log" Nov 21 16:05:48 crc kubenswrapper[4904]: I1121 16:05:48.388731 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4/probe/0.log" Nov 21 16:05:48 crc kubenswrapper[4904]: I1121 16:05:48.451065 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2b6933e2-83af-4bab-b20b-498a29cc1a68/cinder-scheduler/0.log" Nov 21 16:05:48 crc kubenswrapper[4904]: I1121 16:05:48.525051 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3b93a0a1-7379-4d5f-82a8-0bfd9c270ad4/cinder-backup/0.log" Nov 21 16:05:48 crc kubenswrapper[4904]: I1121 16:05:48.718600 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2b6933e2-83af-4bab-b20b-498a29cc1a68/probe/0.log" Nov 21 16:05:48 crc kubenswrapper[4904]: I1121 16:05:48.850949 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_186f682e-35c9-47ac-8e62-264769272a1b/cinder-volume/0.log" Nov 21 16:05:48 crc kubenswrapper[4904]: I1121 16:05:48.874888 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_186f682e-35c9-47ac-8e62-264769272a1b/probe/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.032824 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jmxqm_48746c32-dedf-4967-ba11-9765f1a17ec7/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.117915 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-svg6h_f292a2fb-beff-4c38-891a-db1e34c7157d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.284939 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5767ddb7c-mbpns_a141634c-9846-4d10-89f0-a5a28a50d016/init/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.494956 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5767ddb7c-mbpns_a141634c-9846-4d10-89f0-a5a28a50d016/init/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.628716 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8cf362a-fd51-4493-967e-aa0462ce4007/glance-httpd/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.657483 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5767ddb7c-mbpns_a141634c-9846-4d10-89f0-a5a28a50d016/dnsmasq-dns/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.721943 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d8cf362a-fd51-4493-967e-aa0462ce4007/glance-log/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.878934 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e079f5ff-96a7-417b-bc95-a4578fe3a4ec/glance-httpd/0.log" Nov 21 16:05:49 crc kubenswrapper[4904]: I1121 16:05:49.913265 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_e079f5ff-96a7-417b-bc95-a4578fe3a4ec/glance-log/0.log" Nov 21 16:05:50 crc kubenswrapper[4904]: I1121 16:05:50.267001 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5bcc58b9d9-fdhrg_c9c258f6-6c0c-4072-bd71-610209d2bbb9/heat-engine/0.log" Nov 21 16:05:50 crc kubenswrapper[4904]: I1121 16:05:50.736115 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-695bd477bb-gcrxw_7fc8baac-d51a-42f4-9444-c8e4172be134/horizon/0.log" Nov 21 16:05:51 crc kubenswrapper[4904]: I1121 16:05:51.000636 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-ts5dx_2e2f263f-26f6-4c10-b020-898f112d23d6/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:51 crc kubenswrapper[4904]: I1121 16:05:51.250065 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qzzvq_5863ea30-41cd-46f1-9c0f-95d2367aa9aa/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:51 crc kubenswrapper[4904]: I1121 16:05:51.555044 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-695bd477bb-gcrxw_7fc8baac-d51a-42f4-9444-c8e4172be134/horizon-log/0.log" Nov 21 16:05:51 crc kubenswrapper[4904]: I1121 16:05:51.754282 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-7669f9847d-7bgvn_55eba19c-92ed-4a40-81cb-9085d6becd76/heat-cfnapi/0.log" Nov 21 16:05:51 crc kubenswrapper[4904]: I1121 16:05:51.770447 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395561-8g6qs_ba2790bd-7e69-4181-8240-95640e6696e4/keystone-cron/0.log" Nov 21 16:05:51 crc kubenswrapper[4904]: I1121 16:05:51.780030 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-api-5bd7cc4489-fmh8v_da6b9694-5408-45ce-8e1c-3cf05c404837/heat-api/0.log" Nov 21 16:05:51 crc kubenswrapper[4904]: I1121 16:05:51.971447 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395621-lkq6r_b6002e56-ad4f-43e5-928f-602c88ed887d/keystone-cron/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.019169 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29395681-6bhx2_f97c22a6-b89d-41ef-b9ab-39839347dbb5/keystone-cron/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.195410 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a18139cc-0f50-4c9f-bbb2-7637d2a3c299/kube-state-metrics/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.200476 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b64bb974-stp6c_9b547005-5eef-4c9a-91a0-7d796e269d05/keystone-api/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.276629 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5nfqc_b39ff9bf-7bb1-4dee-9e6f-d9fe2f773a19/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.420379 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-k8zt5_16cb76b3-8c36-46f5-b221-df0d03da240e/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.556989 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_9a0b60d4-f457-4e29-bb0e-f244826249aa/manila-api-log/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.768370 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8bea7feb-21d5-4b01-98b3-5b16737f0274/probe/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.774697 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_9a0b60d4-f457-4e29-bb0e-f244826249aa/manila-api/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.817796 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_8bea7feb-21d5-4b01-98b3-5b16737f0274/manila-scheduler/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.988233 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f2e2192d-1157-4025-9df6-deab99f244fd/manila-share/0.log" Nov 21 16:05:52 crc kubenswrapper[4904]: I1121 16:05:52.995750 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_f2e2192d-1157-4025-9df6-deab99f244fd/probe/0.log" Nov 21 16:05:53 crc kubenswrapper[4904]: I1121 16:05:53.197950 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_78e4e986-d20d-4494-bff4-3cc0fb8af825/mysqld-exporter/0.log" Nov 21 16:05:53 crc kubenswrapper[4904]: I1121 16:05:53.621807 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bc5bc9bb5-xls6r_9814cd75-e32a-40e2-9509-08d8256ee1c7/neutron-httpd/0.log" Nov 21 16:05:53 crc kubenswrapper[4904]: I1121 16:05:53.629162 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-782sq_7cc82b71-e69b-4404-843c-afdc4b449ab4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:53 
crc kubenswrapper[4904]: I1121 16:05:53.641028 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5bc5bc9bb5-xls6r_9814cd75-e32a-40e2-9509-08d8256ee1c7/neutron-api/0.log" Nov 21 16:05:54 crc kubenswrapper[4904]: I1121 16:05:54.390708 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_cbabfd9a-3db8-4b71-886c-1986df601c51/nova-cell0-conductor-conductor/0.log" Nov 21 16:05:54 crc kubenswrapper[4904]: I1121 16:05:54.734031 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_89e907ba-30e5-4c8e-921d-6560b56a80d8/nova-cell1-conductor-conductor/0.log" Nov 21 16:05:54 crc kubenswrapper[4904]: I1121 16:05:54.736144 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9cab507d-ee28-4da3-9ed5-524c74530da5/nova-api-log/0.log" Nov 21 16:05:55 crc kubenswrapper[4904]: I1121 16:05:55.155986 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-nklhl_f238fb8b-7193-4412-ac72-19c3161f2735/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:55 crc kubenswrapper[4904]: I1121 16:05:55.219007 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_72657d59-241e-443a-9a53-f9c794b67958/nova-cell1-novncproxy-novncproxy/0.log" Nov 21 16:05:55 crc kubenswrapper[4904]: I1121 16:05:55.486731 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_56ae0c0e-1f08-4114-adf0-e4a915d519aa/nova-metadata-log/0.log" Nov 21 16:05:55 crc kubenswrapper[4904]: I1121 16:05:55.850113 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9cab507d-ee28-4da3-9ed5-524c74530da5/nova-api-api/0.log" Nov 21 16:05:55 crc kubenswrapper[4904]: I1121 16:05:55.988879 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e6891983-1117-4522-9431-2c73ac552c8a/nova-scheduler-scheduler/0.log" Nov 21 16:05:56 crc kubenswrapper[4904]: I1121 16:05:56.044635 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2987646d-06ff-44a7-b766-ff6ff19ed796/mysql-bootstrap/0.log" Nov 21 16:05:56 crc kubenswrapper[4904]: I1121 16:05:56.218790 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2987646d-06ff-44a7-b766-ff6ff19ed796/mysql-bootstrap/0.log" Nov 21 16:05:56 crc kubenswrapper[4904]: I1121 16:05:56.225089 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_2987646d-06ff-44a7-b766-ff6ff19ed796/galera/0.log" Nov 21 16:05:56 crc kubenswrapper[4904]: I1121 16:05:56.445939 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_554709ef-9d18-4a19-aded-0c8fe94e30e8/mysql-bootstrap/0.log" Nov 21 16:05:56 crc kubenswrapper[4904]: I1121 16:05:56.611790 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_554709ef-9d18-4a19-aded-0c8fe94e30e8/mysql-bootstrap/0.log" Nov 21 16:05:56 crc kubenswrapper[4904]: I1121 16:05:56.725683 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_554709ef-9d18-4a19-aded-0c8fe94e30e8/galera/0.log" Nov 21 16:05:56 crc kubenswrapper[4904]: I1121 16:05:56.819159 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_db36aaca-d216-45b3-b8f1-f7a94bae89e6/openstackclient/0.log" Nov 21 16:05:56 crc kubenswrapper[4904]: I1121 16:05:56.975947 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gh8lv_48ec880a-b9f8-4b7a-9d69-98b730a07a02/ovn-controller/0.log" Nov 21 16:05:57 crc kubenswrapper[4904]: I1121 16:05:57.205436 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-tp8md_c8dbdbe2-57e0-435a-ab3e-4dd526d584b3/openstack-network-exporter/0.log" Nov 21 16:05:57 crc kubenswrapper[4904]: I1121 16:05:57.646909 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jt79x_6f19242e-f99f-4408-9c53-7a92a8c191bc/ovsdb-server-init/0.log" Nov 21 16:05:57 crc kubenswrapper[4904]: I1121 16:05:57.781191 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jt79x_6f19242e-f99f-4408-9c53-7a92a8c191bc/ovsdb-server-init/0.log" Nov 21 16:05:57 crc kubenswrapper[4904]: I1121 16:05:57.829077 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3f2e9b6a-f2dd-4647-9652-e5f609740a53/memcached/0.log" Nov 21 16:05:57 crc kubenswrapper[4904]: I1121 16:05:57.865466 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jt79x_6f19242e-f99f-4408-9c53-7a92a8c191bc/ovs-vswitchd/0.log" Nov 21 16:05:57 crc kubenswrapper[4904]: I1121 16:05:57.876477 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-jt79x_6f19242e-f99f-4408-9c53-7a92a8c191bc/ovsdb-server/0.log" Nov 21 16:05:58 crc kubenswrapper[4904]: I1121 16:05:58.117315 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-qcxnt_c336a560-11e3-4740-b2f3-ebc5203fb0ad/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:05:58 crc kubenswrapper[4904]: I1121 16:05:58.134089 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5208bbb4-5fe6-4507-9308-35202ce25115/openstack-network-exporter/0.log" Nov 21 16:05:58 crc kubenswrapper[4904]: I1121 16:05:58.329900 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5208bbb4-5fe6-4507-9308-35202ce25115/ovn-northd/0.log" Nov 21 16:05:58 crc kubenswrapper[4904]: I1121 16:05:58.356745 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1ac2319f-b7ee-441c-b325-5ca2d83c87e4/openstack-network-exporter/0.log" Nov 21 16:05:58 crc kubenswrapper[4904]: I1121 16:05:58.425336 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1ac2319f-b7ee-441c-b325-5ca2d83c87e4/ovsdbserver-nb/0.log" Nov 21 16:05:58 crc kubenswrapper[4904]: I1121 16:05:58.525576 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:05:58 crc kubenswrapper[4904]: E1121 16:05:58.526124 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:05:58 crc kubenswrapper[4904]: I1121 16:05:58.614831 4904 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b7f4b26a-f41d-478c-b706-67baa265aaf8/openstack-network-exporter/0.log" Nov 21 16:05:58 crc kubenswrapper[4904]: I1121 16:05:58.684544 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b7f4b26a-f41d-478c-b706-67baa265aaf8/ovsdbserver-sb/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.046858 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/init-config-reloader/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.053467 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f758bd6b6-r5mk2_9653a120-e74b-4330-8e8f-faf95b13f63e/placement-api/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.202114 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f758bd6b6-r5mk2_9653a120-e74b-4330-8e8f-faf95b13f63e/placement-log/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.263920 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/init-config-reloader/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.291590 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_56ae0c0e-1f08-4114-adf0-e4a915d519aa/nova-metadata-metadata/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.305287 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/config-reloader/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.345263 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/prometheus/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.447801 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_804be691-8422-4cf8-bfc1-47a1f3c02294/thanos-sidecar/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.543745 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b1f6e46-f0d4-421a-bb86-48f1d622cd97/setup-container/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.722939 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b1f6e46-f0d4-421a-bb86-48f1d622cd97/setup-container/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.758377 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b1f6e46-f0d4-421a-bb86-48f1d622cd97/rabbitmq/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.801729 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fdcbae10-10ee-4213-8758-ce56fbe6a27e/setup-container/0.log" Nov 21 16:05:59 crc kubenswrapper[4904]: I1121 16:05:59.945398 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fdcbae10-10ee-4213-8758-ce56fbe6a27e/setup-container/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.021094 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-75b7t_facb16e4-106b-43c2-a62c-92103c2137ee/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 
16:06:00.025120 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_fdcbae10-10ee-4213-8758-ce56fbe6a27e/rabbitmq/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.159614 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-krjfc_620a1fcb-b550-41c2-964f-f3212ff8a2d0/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.219981 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8plpf_b17ec983-5d7e-4e15-807e-393999d4aa0e/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.291054 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-kzpz9_52c0e44a-c467-4a11-a4f9-21f59b8cd3c5/ssh-known-hosts-edpm-deployment/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.504590 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f94fcfbbf-pg26j_e819c802-1c71-4668-bc99-5b41cc11c656/proxy-server/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.621482 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-x9zjr_c847f685-6c92-40df-9608-675e8f21c058/swift-ring-rebalance/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.718729 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f94fcfbbf-pg26j_e819c802-1c71-4668-bc99-5b41cc11c656/proxy-httpd/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.797375 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/account-reaper/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.842610 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/account-auditor/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.916520 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/account-replicator/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.946186 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/account-server/0.log" Nov 21 16:06:00 crc kubenswrapper[4904]: I1121 16:06:00.989937 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/container-auditor/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.031498 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/container-replicator/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.047241 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/container-server/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.121250 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/container-updater/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.230039 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-expirer/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.230945 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-auditor/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.244361 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-server/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.262424 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-replicator/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.328292 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/object-updater/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.428115 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/rsync/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.471965 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_00f1db1a-ab1c-4db3-9e52-f01d15b3c6ae/swift-recon-cron/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.512526 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-49lxz_168c7941-23ec-43f5-8849-04a31a928d0a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.646632 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-xsxqc_82b0bfd1-2b9d-48c8-89dd-74db2d011083/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:06:01 crc kubenswrapper[4904]: I1121 16:06:01.861443 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_6675a319-1e0d-4549-b2f0-8e5307295f66/test-operator-logs-container/0.log" Nov 21 16:06:02 crc kubenswrapper[4904]: I1121 16:06:02.018344 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-2qlv2_81de75d6-c869-454d-a2a0-09557d478c99/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 21 16:06:02 crc kubenswrapper[4904]: I1121 16:06:02.375880 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_24d17dbe-f722-4f81-b271-ca1e191780a8/tempest-tests-tempest-tests-runner/0.log" Nov 21 16:06:11 crc kubenswrapper[4904]: I1121 16:06:11.512782 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:06:11 crc kubenswrapper[4904]: E1121 16:06:11.513717 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:06:25 crc kubenswrapper[4904]: I1121 16:06:25.515321 4904 scope.go:117] "RemoveContainer" 
containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:06:25 crc kubenswrapper[4904]: E1121 16:06:25.516457 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:06:26 crc kubenswrapper[4904]: I1121 16:06:26.434550 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/util/0.log" Nov 21 16:06:26 crc kubenswrapper[4904]: I1121 16:06:26.612894 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/util/0.log" Nov 21 16:06:26 crc kubenswrapper[4904]: I1121 16:06:26.627035 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/pull/0.log" Nov 21 16:06:26 crc kubenswrapper[4904]: I1121 16:06:26.653381 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/pull/0.log" Nov 21 16:06:26 crc kubenswrapper[4904]: I1121 16:06:26.844725 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/util/0.log" Nov 21 16:06:26 crc kubenswrapper[4904]: I1121 16:06:26.851613 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/pull/0.log" Nov 21 16:06:26 crc kubenswrapper[4904]: I1121 16:06:26.868516 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1417907bb84a965befac844e2add7a4c2fae2daeee775850d35be638bdbp5gr_def3a46f-3c06-4814-b9b9-a702bf229dcf/extract/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.022790 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-j9dcf_50f3313d-ff99-4b0e-931a-c2a774375ae3/kube-rbac-proxy/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.144499 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-hm8cc_4b290147-91ef-4734-961b-b61487960c33/kube-rbac-proxy/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.155009 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-j9dcf_50f3313d-ff99-4b0e-931a-c2a774375ae3/manager/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.320399 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-hm8cc_4b290147-91ef-4734-961b-b61487960c33/manager/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.404792 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-lnp8w_14e3fbea-6dc2-44a4-81be-dfda27a6cdd8/kube-rbac-proxy/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.444045 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-lnp8w_14e3fbea-6dc2-44a4-81be-dfda27a6cdd8/manager/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.577913 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-bnt22_b91d5a1c-a2d5-4875-a23a-e43ae7f18937/kube-rbac-proxy/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.691605 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-bnt22_b91d5a1c-a2d5-4875-a23a-e43ae7f18937/manager/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.859453 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-ncvqt_55b86375-94e3-4c12-96b9-c5f581b3d8f3/kube-rbac-proxy/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.931501 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-ncvqt_55b86375-94e3-4c12-96b9-c5f581b3d8f3/manager/0.log" Nov 21 16:06:27 crc kubenswrapper[4904]: I1121 16:06:27.962419 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-jp6bp_d81ae352-08d2-433c-b883-deeb78888945/kube-rbac-proxy/0.log" Nov 21 16:06:28 crc kubenswrapper[4904]: I1121 16:06:28.130777 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-jp6bp_d81ae352-08d2-433c-b883-deeb78888945/manager/0.log" Nov 21 16:06:28 crc kubenswrapper[4904]: I1121 16:06:28.190696 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-h78bp_64f35f86-9389-4506-ad53-d42eec926447/kube-rbac-proxy/0.log" Nov 21 16:06:28 crc kubenswrapper[4904]: I1121 16:06:28.430467 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-h78bp_64f35f86-9389-4506-ad53-d42eec926447/manager/0.log" Nov 21 16:06:28 crc kubenswrapper[4904]: I1121 16:06:28.448249 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-wvfgt_15e30337-bd79-4d01-b3ab-2177d3c0609b/manager/0.log" Nov 21 16:06:28 crc kubenswrapper[4904]: I1121 16:06:28.476767 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-wvfgt_15e30337-bd79-4d01-b3ab-2177d3c0609b/kube-rbac-proxy/0.log" Nov 21 16:06:28 crc kubenswrapper[4904]: I1121 16:06:28.870324 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nkqwb_37bdccc0-c16d-4523-94d7-978d8313ca7f/manager/0.log" Nov 21 16:06:28 crc kubenswrapper[4904]: I1121 16:06:28.933185 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-nkqwb_37bdccc0-c16d-4523-94d7-978d8313ca7f/kube-rbac-proxy/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.078719 4904 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xr5mv_98bcfebc-c45f-4a2e-a21f-9c8cf892898c/kube-rbac-proxy/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.192585 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-xr5mv_98bcfebc-c45f-4a2e-a21f-9c8cf892898c/manager/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.282514 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tg85w_72dacec1-d81b-46df-acd2-962095286389/kube-rbac-proxy/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.308945 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-tg85w_72dacec1-d81b-46df-acd2-962095286389/manager/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.453585 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-b8xrh_301f4657-8519-4071-a82b-b35f80739372/kube-rbac-proxy/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.578335 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-b8xrh_301f4657-8519-4071-a82b-b35f80739372/manager/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.701869 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-n2t2q_4dc68816-ed56-4e8b-a41b-b91868bc57d3/kube-rbac-proxy/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.819771 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-6bxqf_c8300ff9-666c-4d85-bd43-120d41529215/kube-rbac-proxy/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.851275 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-n2t2q_4dc68816-ed56-4e8b-a41b-b91868bc57d3/manager/0.log" Nov 21 16:06:29 crc kubenswrapper[4904]: I1121 16:06:29.949194 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-6bxqf_c8300ff9-666c-4d85-bd43-120d41529215/manager/0.log" Nov 21 16:06:30 crc kubenswrapper[4904]: I1121 16:06:30.070892 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t_ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd/kube-rbac-proxy/0.log" Nov 21 16:06:30 crc kubenswrapper[4904]: I1121 16:06:30.089923 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-h7s6t_ca0c7d8f-c859-4b3c-9ae7-8229830a1fcd/manager/0.log" Nov 21 16:06:30 crc kubenswrapper[4904]: I1121 16:06:30.569015 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7gkpw_bd5157a7-e11b-444d-8e6b-d96382adc923/registry-server/0.log" Nov 21 16:06:30 crc kubenswrapper[4904]: I1121 16:06:30.590110 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7bc9ddc77b-wmm4f_760055d2-e646-4466-a667-90292a69546a/operator/0.log" Nov 21 16:06:30 crc kubenswrapper[4904]: I1121 
16:06:30.725430 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-qmmmc_887b5387-ce64-43b1-8755-2c401719a2d6/kube-rbac-proxy/0.log" Nov 21 16:06:30 crc kubenswrapper[4904]: I1121 16:06:30.897183 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-qmmmc_887b5387-ce64-43b1-8755-2c401719a2d6/manager/0.log" Nov 21 16:06:31 crc kubenswrapper[4904]: I1121 16:06:31.015471 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-lrfmt_27625471-8f27-449b-a245-558e079a38ab/kube-rbac-proxy/0.log" Nov 21 16:06:31 crc kubenswrapper[4904]: I1121 16:06:31.017071 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-lrfmt_27625471-8f27-449b-a245-558e079a38ab/manager/0.log" Nov 21 16:06:31 crc kubenswrapper[4904]: I1121 16:06:31.168816 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5lzvr_2e76f101-21a3-4b78-970e-55a016ee2a40/operator/0.log" Nov 21 16:06:31 crc kubenswrapper[4904]: I1121 16:06:31.348295 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-kpwdb_756ba318-ec48-4012-9d9d-108c3f1fad3c/kube-rbac-proxy/0.log" Nov 21 16:06:31 crc kubenswrapper[4904]: I1121 16:06:31.536020 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-kpwdb_756ba318-ec48-4012-9d9d-108c3f1fad3c/manager/0.log" Nov 21 16:06:31 crc kubenswrapper[4904]: I1121 16:06:31.571903 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7fc59d4bfd-vzk7g_90ef3fb1-0f63-4fd0-94ef-2fefce011d23/kube-rbac-proxy/0.log" Nov 21 16:06:31 crc kubenswrapper[4904]: I1121 16:06:31.795645 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-vtkhq_a8dc5688-31cd-412c-91e9-3ae137d2a20a/kube-rbac-proxy/0.log" Nov 21 16:06:31 crc kubenswrapper[4904]: I1121 16:06:31.877326 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-vtkhq_a8dc5688-31cd-412c-91e9-3ae137d2a20a/manager/0.log" Nov 21 16:06:32 crc kubenswrapper[4904]: I1121 16:06:32.063277 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7fc59d4bfd-vzk7g_90ef3fb1-0f63-4fd0-94ef-2fefce011d23/manager/0.log" Nov 21 16:06:32 crc kubenswrapper[4904]: I1121 16:06:32.101008 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-57jrf_fb4141a1-4768-45ae-a8e8-ec1d1c01db4e/manager/0.log" Nov 21 16:06:32 crc kubenswrapper[4904]: I1121 16:06:32.129426 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-57jrf_fb4141a1-4768-45ae-a8e8-ec1d1c01db4e/kube-rbac-proxy/0.log" Nov 21 16:06:32 crc kubenswrapper[4904]: I1121 16:06:32.334605 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-79fb5496bb-zp56v_80a18488-07da-4b66-b164-40f7d7027b5b/manager/0.log" Nov 21 16:06:39 crc 
kubenswrapper[4904]: I1121 16:06:39.513575 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:06:39 crc kubenswrapper[4904]: E1121 16:06:39.514525 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:06:50 crc kubenswrapper[4904]: I1121 16:06:50.244771 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-tl29t_4b0cf0a0-036d-4e79-836e-208aa70df688/control-plane-machine-set-operator/0.log" Nov 21 16:06:50 crc kubenswrapper[4904]: I1121 16:06:50.473955 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qdh9s_b29d96e3-7aa4-4626-a245-93ee36f7595f/kube-rbac-proxy/0.log" Nov 21 16:06:50 crc kubenswrapper[4904]: I1121 16:06:50.490825 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qdh9s_b29d96e3-7aa4-4626-a245-93ee36f7595f/machine-api-operator/0.log" Nov 21 16:06:51 crc kubenswrapper[4904]: I1121 16:06:51.513892 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:06:51 crc kubenswrapper[4904]: E1121 16:06:51.515289 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:07:02 crc kubenswrapper[4904]: I1121 16:07:02.403532 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-w2h7s_e0eaefd8-c20d-4081-baa3-df1a92c06136/cert-manager-controller/0.log" Nov 21 16:07:02 crc kubenswrapper[4904]: I1121 16:07:02.574832 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-hz5x8_c9229a7d-9559-43dd-8470-5e0377837fa3/cert-manager-cainjector/0.log" Nov 21 16:07:02 crc kubenswrapper[4904]: I1121 16:07:02.683310 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-q7hpz_9fa4f8f4-6159-45d3-886a-c8bfa7cd6b80/cert-manager-webhook/0.log" Nov 21 16:07:05 crc kubenswrapper[4904]: I1121 16:07:05.514239 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:07:05 crc kubenswrapper[4904]: E1121 16:07:05.515202 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:07:14 crc kubenswrapper[4904]: I1121 
16:07:14.162895 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-vkkht_46f05915-4deb-45e5-8f4e-109e6c633d4e/nmstate-console-plugin/0.log" Nov 21 16:07:14 crc kubenswrapper[4904]: I1121 16:07:14.362176 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-m899x_a317caad-8aa4-4620-9812-975486e6c3f6/nmstate-handler/0.log" Nov 21 16:07:14 crc kubenswrapper[4904]: I1121 16:07:14.465418 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-49ws5_ae90deaf-c915-46ad-b731-7b79e746fffa/kube-rbac-proxy/0.log" Nov 21 16:07:14 crc kubenswrapper[4904]: I1121 16:07:14.506914 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-49ws5_ae90deaf-c915-46ad-b731-7b79e746fffa/nmstate-metrics/0.log" Nov 21 16:07:14 crc kubenswrapper[4904]: I1121 16:07:14.709115 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-ql6sj_3b9ad4cd-55bf-41e4-8740-b88b8be917c5/nmstate-operator/0.log" Nov 21 16:07:14 crc kubenswrapper[4904]: I1121 16:07:14.732810 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-5j92x_d39cd226-e819-4c84-9b6b-c28b8ae7d638/nmstate-webhook/0.log" Nov 21 16:07:18 crc kubenswrapper[4904]: I1121 16:07:18.512962 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:07:18 crc kubenswrapper[4904]: E1121 16:07:18.513764 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.243413 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5fc6c85b79-9lzch_d9a95797-5a30-4849-8c30-13f3ff99a4c2/manager/0.log" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.275166 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5fc6c85b79-9lzch_d9a95797-5a30-4849-8c30-13f3ff99a4c2/kube-rbac-proxy/0.log" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.806455 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pt6wx"] Nov 21 16:07:26 crc kubenswrapper[4904]: E1121 16:07:26.806866 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22a90787-44be-4479-ac86-311e4d418d31" containerName="container-00" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.806884 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="22a90787-44be-4479-ac86-311e4d418d31" containerName="container-00" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.807169 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="22a90787-44be-4479-ac86-311e4d418d31" containerName="container-00" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.808692 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.824551 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pt6wx"] Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.958992 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-catalog-content\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.959138 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-utilities\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:26 crc kubenswrapper[4904]: I1121 16:07:26.959184 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6v6\" (UniqueName: \"kubernetes.io/projected/239373d6-6821-45c8-86c6-9ec649f83ea4-kube-api-access-bh6v6\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:27 crc kubenswrapper[4904]: I1121 16:07:27.060773 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-utilities\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:27 crc kubenswrapper[4904]: I1121 16:07:27.060876 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh6v6\" (UniqueName: \"kubernetes.io/projected/239373d6-6821-45c8-86c6-9ec649f83ea4-kube-api-access-bh6v6\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:27 crc kubenswrapper[4904]: I1121 16:07:27.061045 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-catalog-content\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:27 crc kubenswrapper[4904]: I1121 16:07:27.061205 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-utilities\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:27 crc kubenswrapper[4904]: I1121 16:07:27.061608 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-catalog-content\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:27 crc kubenswrapper[4904]: I1121 16:07:27.100278 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bh6v6\" (UniqueName: \"kubernetes.io/projected/239373d6-6821-45c8-86c6-9ec649f83ea4-kube-api-access-bh6v6\") pod \"redhat-operators-pt6wx\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:27 crc kubenswrapper[4904]: I1121 16:07:27.145838 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:28 crc kubenswrapper[4904]: I1121 16:07:28.201103 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pt6wx"] Nov 21 16:07:28 crc kubenswrapper[4904]: I1121 16:07:28.839517 4904 generic.go:334] "Generic (PLEG): container finished" podID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerID="f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8" exitCode=0 Nov 21 16:07:28 crc kubenswrapper[4904]: I1121 16:07:28.839635 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt6wx" event={"ID":"239373d6-6821-45c8-86c6-9ec649f83ea4","Type":"ContainerDied","Data":"f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8"} Nov 21 16:07:28 crc kubenswrapper[4904]: I1121 16:07:28.839843 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt6wx" event={"ID":"239373d6-6821-45c8-86c6-9ec649f83ea4","Type":"ContainerStarted","Data":"376fc3150f742e1ed0a8c292469080f877a823ceef71d81d40abc4db594fcd0d"} Nov 21 16:07:28 crc kubenswrapper[4904]: I1121 16:07:28.846882 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 21 16:07:29 crc kubenswrapper[4904]: I1121 16:07:29.852489 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt6wx" event={"ID":"239373d6-6821-45c8-86c6-9ec649f83ea4","Type":"ContainerStarted","Data":"2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723"} Nov 21 16:07:33 crc kubenswrapper[4904]: I1121 16:07:33.513193 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:07:33 crc kubenswrapper[4904]: E1121 16:07:33.514034 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:07:35 crc kubenswrapper[4904]: I1121 16:07:35.909339 4904 generic.go:334] "Generic (PLEG): container finished" podID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerID="2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723" exitCode=0 Nov 21 16:07:35 crc kubenswrapper[4904]: I1121 16:07:35.909434 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt6wx" event={"ID":"239373d6-6821-45c8-86c6-9ec649f83ea4","Type":"ContainerDied","Data":"2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723"} Nov 21 16:07:36 crc kubenswrapper[4904]: I1121 16:07:36.923713 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt6wx" 
event={"ID":"239373d6-6821-45c8-86c6-9ec649f83ea4","Type":"ContainerStarted","Data":"7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b"} Nov 21 16:07:36 crc kubenswrapper[4904]: I1121 16:07:36.945085 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pt6wx" podStartSLOduration=3.462362303 podStartE2EDuration="10.945062574s" podCreationTimestamp="2025-11-21 16:07:26 +0000 UTC" firstStartedPulling="2025-11-21 16:07:28.841374424 +0000 UTC m=+9322.962906976" lastFinishedPulling="2025-11-21 16:07:36.324074695 +0000 UTC m=+9330.445607247" observedRunningTime="2025-11-21 16:07:36.941369344 +0000 UTC m=+9331.062901916" watchObservedRunningTime="2025-11-21 16:07:36.945062574 +0000 UTC m=+9331.066595136" Nov 21 16:07:37 crc kubenswrapper[4904]: I1121 16:07:37.146789 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:37 crc kubenswrapper[4904]: I1121 16:07:37.146865 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:07:38 crc kubenswrapper[4904]: I1121 16:07:38.200267 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pt6wx" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="registry-server" probeResult="failure" output=< Nov 21 16:07:38 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 16:07:38 crc kubenswrapper[4904]: > Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.007201 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-pc7gc_24f73a17-3c0e-4e6b-9a16-461582908e22/cluster-logging-operator/0.log" Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.260034 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-mz46k_a982be9e-6e45-4a41-b736-9a673acaf3c0/collector/0.log" Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.321099 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_4177ccdd-88ab-419a-a189-2f1af9b587e3/loki-compactor/0.log" Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.479461 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-6rt82_e4e29add-5250-42af-af50-40efeec82a2d/loki-distributor/0.log" Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.519590 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6fc7c85-psz2n_fafeb02f-1864-4678-9846-d799fa1bc3c4/gateway/0.log" Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.571939 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6fc7c85-psz2n_fafeb02f-1864-4678-9846-d799fa1bc3c4/opa/0.log" Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.709197 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6fc7c85-zrb77_0c5c39d2-3a23-41c1-bec3-317ca97022f4/gateway/0.log" Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.791125 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-5bc6fc7c85-zrb77_0c5c39d2-3a23-41c1-bec3-317ca97022f4/opa/0.log" Nov 21 16:07:40 crc kubenswrapper[4904]: I1121 16:07:40.918519 4904 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_4b2c7f54-eb30-4db6-a7fd-c5f9764e14c6/loki-index-gateway/0.log" Nov 21 16:07:41 crc kubenswrapper[4904]: I1121 16:07:41.158207 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_fb578f45-397a-422e-af3c-04841227ef67/loki-ingester/0.log" Nov 21 16:07:41 crc kubenswrapper[4904]: I1121 16:07:41.230063 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-zd9gb_93fb0ef8-1685-4951-b2db-a0921787ff1a/loki-querier/0.log" Nov 21 16:07:41 crc kubenswrapper[4904]: I1121 16:07:41.333914 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-dlmmw_a74cf7e4-1b6c-4547-bb31-eea3e99e49a1/loki-query-frontend/0.log" Nov 21 16:07:46 crc kubenswrapper[4904]: I1121 16:07:46.513760 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:07:46 crc kubenswrapper[4904]: E1121 16:07:46.514608 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:07:48 crc kubenswrapper[4904]: I1121 16:07:48.194394 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pt6wx" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="registry-server" probeResult="failure" output=< Nov 21 16:07:48 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 16:07:48 crc kubenswrapper[4904]: > Nov 21 16:07:54 crc kubenswrapper[4904]: I1121 16:07:54.368888 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-q4s9b_37655929-be52-46f4-b914-4500bada3dac/kube-rbac-proxy/0.log" Nov 21 16:07:54 crc kubenswrapper[4904]: I1121 16:07:54.457635 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-q4s9b_37655929-be52-46f4-b914-4500bada3dac/controller/0.log" Nov 21 16:07:54 crc kubenswrapper[4904]: I1121 16:07:54.572257 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-9dv2t_12c5fcd1-2cbd-47f8-b16d-dc3c8e34d209/frr-k8s-webhook-server/0.log" Nov 21 16:07:54 crc kubenswrapper[4904]: I1121 16:07:54.676215 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-frr-files/0.log" Nov 21 16:07:54 crc kubenswrapper[4904]: I1121 16:07:54.861751 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-metrics/0.log" Nov 21 16:07:54 crc kubenswrapper[4904]: I1121 16:07:54.874444 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-reloader/0.log" Nov 21 16:07:54 crc kubenswrapper[4904]: I1121 16:07:54.888518 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-frr-files/0.log" Nov 21 16:07:54 crc kubenswrapper[4904]: I1121 
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.080256 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-metrics/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.083144 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-reloader/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.112623 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-frr-files/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.147863 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-metrics/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.375871 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/controller/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.411184 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-metrics/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.419626 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-reloader/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.443371 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/cp-frr-files/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.641729 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/frr-metrics/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.644495 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/kube-rbac-proxy/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.651251 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/kube-rbac-proxy-frr/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.863628 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/reloader/0.log"
Nov 21 16:07:55 crc kubenswrapper[4904]: I1121 16:07:55.886381 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7c88768dbc-5mgsg_20d3a8a7-19b0-4ae0-8f88-5e16945e90db/manager/0.log"
Nov 21 16:07:56 crc kubenswrapper[4904]: I1121 16:07:56.100397 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8d7cf7c8c-8rg4q_bc2aa529-f9e7-4b13-94e8-e766abd5904f/webhook-server/0.log"
Nov 21 16:07:56 crc kubenswrapper[4904]: I1121 16:07:56.388774 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-sfp7g_c20302bf-7009-4422-a47b-6b7fb5b14a99/kube-rbac-proxy/0.log"
Nov 21 16:07:56 crc kubenswrapper[4904]: I1121 16:07:56.945425 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-sfp7g_c20302bf-7009-4422-a47b-6b7fb5b14a99/speaker/0.log"
path="/var/log/pods/metallb-system_speaker-sfp7g_c20302bf-7009-4422-a47b-6b7fb5b14a99/speaker/0.log" Nov 21 16:07:57 crc kubenswrapper[4904]: I1121 16:07:57.514732 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:07:57 crc kubenswrapper[4904]: E1121 16:07:57.515025 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:07:58 crc kubenswrapper[4904]: I1121 16:07:58.207464 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pt6wx" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="registry-server" probeResult="failure" output=< Nov 21 16:07:58 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 16:07:58 crc kubenswrapper[4904]: > Nov 21 16:07:58 crc kubenswrapper[4904]: I1121 16:07:58.295415 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z4mtp_31b4b535-09fb-4c70-8a63-f07880e79fb2/frr/0.log" Nov 21 16:08:08 crc kubenswrapper[4904]: I1121 16:08:08.209939 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pt6wx" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="registry-server" probeResult="failure" output=< Nov 21 16:08:08 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 16:08:08 crc kubenswrapper[4904]: > Nov 21 16:08:09 crc kubenswrapper[4904]: I1121 16:08:09.792072 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/util/0.log" Nov 21 16:08:09 crc kubenswrapper[4904]: I1121 16:08:09.976913 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/pull/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.017322 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/util/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.038258 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/pull/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.227547 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/pull/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.284455 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/util/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.321408 4904 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8jr2qz_8071e483-279a-45ad-a73e-c5d487e982d0/extract/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.442378 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/util/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.665338 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/util/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.666730 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/pull/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.666898 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/pull/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.876205 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/util/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.884347 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/extract/0.log" Nov 21 16:08:10 crc kubenswrapper[4904]: I1121 16:08:10.898760 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ek5c7p_f09e9308-5735-4731-968d-e6b202d380d8/pull/0.log" Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.078196 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/util/0.log" Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.473649 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/util/0.log" Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.481069 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/pull/0.log" Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.502903 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/pull/0.log" Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.513067 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:08:11 crc kubenswrapper[4904]: E1121 16:08:11.513461 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.634643 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/util/0.log"
Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.679967 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/pull/0.log"
Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.726306 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92105lm5l_64f48be9-6d2a-4c5f-adb2-6b2bea485f9c/extract/0.log"
Nov 21 16:08:11 crc kubenswrapper[4904]: I1121 16:08:11.859394 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/util/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.075534 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/pull/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.079282 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/util/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.086342 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/pull/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.269740 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/pull/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.310688 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/extract/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.311429 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fwkkgc_1060d3aa-9053-4e42-bc60-ff037f067cb9/util/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.488092 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-utilities/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.729227 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-content/0.log"
Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.769203 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-utilities/0.log"
path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-utilities/0.log" Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.780507 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-content/0.log" Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.966147 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-utilities/0.log" Nov 21 16:08:12 crc kubenswrapper[4904]: I1121 16:08:12.999179 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/extract-content/0.log" Nov 21 16:08:13 crc kubenswrapper[4904]: I1121 16:08:13.287284 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-utilities/0.log" Nov 21 16:08:13 crc kubenswrapper[4904]: I1121 16:08:13.548964 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-content/0.log" Nov 21 16:08:13 crc kubenswrapper[4904]: I1121 16:08:13.580354 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-utilities/0.log" Nov 21 16:08:13 crc kubenswrapper[4904]: I1121 16:08:13.596840 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-content/0.log" Nov 21 16:08:13 crc kubenswrapper[4904]: I1121 16:08:13.848589 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-utilities/0.log" Nov 21 16:08:13 crc kubenswrapper[4904]: I1121 16:08:13.949520 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/extract-content/0.log" Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.143331 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/util/0.log" Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.312312 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b7jgw_d55d1622-65de-445c-9699-33f122796f2e/registry-server/0.log" Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.385405 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/util/0.log" Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.422347 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/pull/0.log" Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.446423 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/pull/0.log" Nov 21 16:08:14 
Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.667300 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/util/0.log"
Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.705505 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/pull/0.log"
Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.730354 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xzkb5_659c44b1-d13a-499e-96f2-3238040bfb51/extract/0.log"
Nov 21 16:08:14 crc kubenswrapper[4904]: I1121 16:08:14.964372 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-ww7zw_dd6f0ea3-c491-4f01-a129-e2e0119808b7/marketplace-operator/0.log"
Nov 21 16:08:15 crc kubenswrapper[4904]: I1121 16:08:15.033269 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-utilities/0.log"
Nov 21 16:08:15 crc kubenswrapper[4904]: I1121 16:08:15.147252 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-djzs5_ba7785a9-be25-4320-a9ae-eb81a5a620e9/registry-server/0.log"
Nov 21 16:08:15 crc kubenswrapper[4904]: I1121 16:08:15.245571 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-content/0.log"
Nov 21 16:08:15 crc kubenswrapper[4904]: I1121 16:08:15.280609 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-utilities/0.log"
Nov 21 16:08:15 crc kubenswrapper[4904]: I1121 16:08:15.306031 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-content/0.log"
Nov 21 16:08:15 crc kubenswrapper[4904]: I1121 16:08:15.602121 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-utilities/0.log"
Nov 21 16:08:15 crc kubenswrapper[4904]: I1121 16:08:15.728844 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-utilities/0.log"
Nov 21 16:08:15 crc kubenswrapper[4904]: I1121 16:08:15.741859 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/extract-content/0.log"
Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.021299 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-utilities/0.log"
Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.025006 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-content/0.log"
Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.090231 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-content/0.log"
path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-content/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.107480 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-nw24d_80bb5b26-345d-4800-aabe-95a66a08ac79/registry-server/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.327158 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-content/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.395944 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pt6wx_239373d6-6821-45c8-86c6-9ec649f83ea4/extract-utilities/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.584586 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/extract-utilities/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.610560 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pt6wx_239373d6-6821-45c8-86c6-9ec649f83ea4/extract-content/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.645448 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pt6wx_239373d6-6821-45c8-86c6-9ec649f83ea4/extract-content/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.648431 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pt6wx_239373d6-6821-45c8-86c6-9ec649f83ea4/extract-utilities/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.887982 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pt6wx_239373d6-6821-45c8-86c6-9ec649f83ea4/extract-utilities/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.897328 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pt6wx_239373d6-6821-45c8-86c6-9ec649f83ea4/registry-server/0.log" Nov 21 16:08:16 crc kubenswrapper[4904]: I1121 16:08:16.904257 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pt6wx_239373d6-6821-45c8-86c6-9ec649f83ea4/extract-content/0.log" Nov 21 16:08:17 crc kubenswrapper[4904]: I1121 16:08:17.206708 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:08:17 crc kubenswrapper[4904]: I1121 16:08:17.262120 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:08:17 crc kubenswrapper[4904]: I1121 16:08:17.448521 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pt6wx"] Nov 21 16:08:17 crc kubenswrapper[4904]: I1121 16:08:17.720425 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-7gch5_aa838181-d22a-4b07-b1f1-e7cd7b728745/registry-server/0.log" Nov 21 16:08:18 crc kubenswrapper[4904]: I1121 16:08:18.383598 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pt6wx" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="registry-server" 
containerID="cri-o://7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b" gracePeriod=2 Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.234632 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.276771 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-utilities\") pod \"239373d6-6821-45c8-86c6-9ec649f83ea4\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.276912 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh6v6\" (UniqueName: \"kubernetes.io/projected/239373d6-6821-45c8-86c6-9ec649f83ea4-kube-api-access-bh6v6\") pod \"239373d6-6821-45c8-86c6-9ec649f83ea4\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.277169 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-catalog-content\") pod \"239373d6-6821-45c8-86c6-9ec649f83ea4\" (UID: \"239373d6-6821-45c8-86c6-9ec649f83ea4\") " Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.282483 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-utilities" (OuterVolumeSpecName: "utilities") pod "239373d6-6821-45c8-86c6-9ec649f83ea4" (UID: "239373d6-6821-45c8-86c6-9ec649f83ea4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.292914 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/239373d6-6821-45c8-86c6-9ec649f83ea4-kube-api-access-bh6v6" (OuterVolumeSpecName: "kube-api-access-bh6v6") pod "239373d6-6821-45c8-86c6-9ec649f83ea4" (UID: "239373d6-6821-45c8-86c6-9ec649f83ea4"). InnerVolumeSpecName "kube-api-access-bh6v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.379228 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.379269 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh6v6\" (UniqueName: \"kubernetes.io/projected/239373d6-6821-45c8-86c6-9ec649f83ea4-kube-api-access-bh6v6\") on node \"crc\" DevicePath \"\"" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.386977 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "239373d6-6821-45c8-86c6-9ec649f83ea4" (UID: "239373d6-6821-45c8-86c6-9ec649f83ea4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.393821 4904 generic.go:334] "Generic (PLEG): container finished" podID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerID="7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b" exitCode=0 Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.393867 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt6wx" event={"ID":"239373d6-6821-45c8-86c6-9ec649f83ea4","Type":"ContainerDied","Data":"7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b"} Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.393892 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pt6wx" event={"ID":"239373d6-6821-45c8-86c6-9ec649f83ea4","Type":"ContainerDied","Data":"376fc3150f742e1ed0a8c292469080f877a823ceef71d81d40abc4db594fcd0d"} Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.393913 4904 scope.go:117] "RemoveContainer" containerID="7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.394049 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pt6wx" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.428565 4904 scope.go:117] "RemoveContainer" containerID="2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.436320 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pt6wx"] Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.448919 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pt6wx"] Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.455832 4904 scope.go:117] "RemoveContainer" containerID="f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.481182 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/239373d6-6821-45c8-86c6-9ec649f83ea4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.510261 4904 scope.go:117] "RemoveContainer" containerID="7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b" Nov 21 16:08:19 crc kubenswrapper[4904]: E1121 16:08:19.513584 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b\": container with ID starting with 7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b not found: ID does not exist" containerID="7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b" Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.513685 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b"} err="failed to get container status \"7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b\": rpc error: code = NotFound desc = could not find container \"7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b\": container with ID starting with 7203eea31c760ed864ff9792d3232ce238453ac6d7ab7408cce0918c0b97c67b not found: ID does not exist" Nov 21 16:08:19 crc 
Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.513711 4904 scope.go:117] "RemoveContainer" containerID="2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723"
Nov 21 16:08:19 crc kubenswrapper[4904]: E1121 16:08:19.514204 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723\": container with ID starting with 2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723 not found: ID does not exist" containerID="2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723"
Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.514240 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723"} err="failed to get container status \"2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723\": rpc error: code = NotFound desc = could not find container \"2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723\": container with ID starting with 2651485792cc2a2decb360a9f01feb2ec43a85a750aaecb67db2c68c80d10723 not found: ID does not exist"
Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.514262 4904 scope.go:117] "RemoveContainer" containerID="f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8"
Nov 21 16:08:19 crc kubenswrapper[4904]: E1121 16:08:19.514901 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8\": container with ID starting with f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8 not found: ID does not exist" containerID="f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8"
Nov 21 16:08:19 crc kubenswrapper[4904]: I1121 16:08:19.515027 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8"} err="failed to get container status \"f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8\": rpc error: code = NotFound desc = could not find container \"f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8\": container with ID starting with f44815bf21f4ca6a13733296e919ccb172bb233177fe160aee6be2eabee832e8 not found: ID does not exist"
Nov 21 16:08:20 crc kubenswrapper[4904]: I1121 16:08:20.525882 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" path="/var/lib/kubelet/pods/239373d6-6821-45c8-86c6-9ec649f83ea4/volumes"
Nov 21 16:08:26 crc kubenswrapper[4904]: I1121 16:08:26.521964 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14"
Nov 21 16:08:26 crc kubenswrapper[4904]: E1121 16:08:26.522746 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178"
Nov 21 16:08:29 crc kubenswrapper[4904]: I1121 16:08:29.454811 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-gg6qn_c4f53eb1-20cb-4196-89e2-197cecdacc6c/prometheus-operator/0.log"
path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-gg6qn_c4f53eb1-20cb-4196-89e2-197cecdacc6c/prometheus-operator/0.log" Nov 21 16:08:29 crc kubenswrapper[4904]: I1121 16:08:29.652638 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c86f6999b-57ql7_7f97ec95-046c-4a0e-9ebf-3baf5fed1053/prometheus-operator-admission-webhook/0.log" Nov 21 16:08:29 crc kubenswrapper[4904]: I1121 16:08:29.686578 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5c86f6999b-sww5p_3d488c9a-40e8-4cce-a260-c4610af92de8/prometheus-operator-admission-webhook/0.log" Nov 21 16:08:29 crc kubenswrapper[4904]: I1121 16:08:29.895589 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-l4jvh_ebdd832d-d268-472a-b067-d61a6c520b7f/operator/0.log" Nov 21 16:08:29 crc kubenswrapper[4904]: I1121 16:08:29.936844 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-nfqjt_6a4ca754-03de-434a-b5ba-3f0d288b1e0c/observability-ui-dashboards/0.log" Nov 21 16:08:30 crc kubenswrapper[4904]: I1121 16:08:30.092984 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-pqc2t_2d39b52b-adba-414a-ba10-66181feecef9/perses-operator/0.log" Nov 21 16:08:40 crc kubenswrapper[4904]: I1121 16:08:40.515324 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:08:40 crc kubenswrapper[4904]: E1121 16:08:40.516096 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:08:42 crc kubenswrapper[4904]: I1121 16:08:42.871870 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5fc6c85b79-9lzch_d9a95797-5a30-4849-8c30-13f3ff99a4c2/kube-rbac-proxy/0.log" Nov 21 16:08:42 crc kubenswrapper[4904]: I1121 16:08:42.957703 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5fc6c85b79-9lzch_d9a95797-5a30-4849-8c30-13f3ff99a4c2/manager/0.log" Nov 21 16:08:53 crc kubenswrapper[4904]: E1121 16:08:53.733018 4904 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.138:51580->38.102.83.138:45309: write tcp 38.102.83.138:51580->38.102.83.138:45309: write: broken pipe Nov 21 16:08:55 crc kubenswrapper[4904]: I1121 16:08:55.513872 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:08:55 crc kubenswrapper[4904]: E1121 16:08:55.514728 4904 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xb8tn_openshift-machine-config-operator(96e1548b-c40d-450b-a2f1-51e56c467178)\"" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" 
podUID="96e1548b-c40d-450b-a2f1-51e56c467178" Nov 21 16:09:08 crc kubenswrapper[4904]: I1121 16:09:08.514093 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14" Nov 21 16:09:08 crc kubenswrapper[4904]: I1121 16:09:08.898505 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"a5c9bab99d951776ab72466468299fe972f52af3081a0e25115223fb538d13e5"} Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.117542 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4xzw7"] Nov 21 16:09:17 crc kubenswrapper[4904]: E1121 16:09:17.119908 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="extract-utilities" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.120162 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="extract-utilities" Nov 21 16:09:17 crc kubenswrapper[4904]: E1121 16:09:17.120295 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="registry-server" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.120307 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="registry-server" Nov 21 16:09:17 crc kubenswrapper[4904]: E1121 16:09:17.120337 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="extract-content" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.120345 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="extract-content" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.121136 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="239373d6-6821-45c8-86c6-9ec649f83ea4" containerName="registry-server" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.123417 4904 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.127892 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xzw7"] Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.244365 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-catalog-content\") pod \"redhat-marketplace-4xzw7\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.244422 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qfqv\" (UniqueName: \"kubernetes.io/projected/a24463fe-4844-429f-b80e-b4cbb4298096-kube-api-access-9qfqv\") pod \"redhat-marketplace-4xzw7\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.244521 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-utilities\") pod \"redhat-marketplace-4xzw7\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.347120 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-catalog-content\") pod \"redhat-marketplace-4xzw7\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.347183 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qfqv\" (UniqueName: \"kubernetes.io/projected/a24463fe-4844-429f-b80e-b4cbb4298096-kube-api-access-9qfqv\") pod \"redhat-marketplace-4xzw7\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.347266 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-utilities\") pod \"redhat-marketplace-4xzw7\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.348147 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-catalog-content\") pod \"redhat-marketplace-4xzw7\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.348169 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-utilities\") pod \"redhat-marketplace-4xzw7\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.389981 4904 operation_generator.go:637] "MountVolume.SetUp 
Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.453992 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xzw7"
Nov 21 16:09:17 crc kubenswrapper[4904]: I1121 16:09:17.986004 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xzw7"]
Nov 21 16:09:19 crc kubenswrapper[4904]: I1121 16:09:19.007403 4904 generic.go:334] "Generic (PLEG): container finished" podID="a24463fe-4844-429f-b80e-b4cbb4298096" containerID="c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b" exitCode=0
Nov 21 16:09:19 crc kubenswrapper[4904]: I1121 16:09:19.007530 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xzw7" event={"ID":"a24463fe-4844-429f-b80e-b4cbb4298096","Type":"ContainerDied","Data":"c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b"}
Nov 21 16:09:19 crc kubenswrapper[4904]: I1121 16:09:19.014178 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xzw7" event={"ID":"a24463fe-4844-429f-b80e-b4cbb4298096","Type":"ContainerStarted","Data":"b65de1911a540336f3584b62697acaf07bce3494815bfbf4ea73e994bc1606ad"}
Nov 21 16:09:20 crc kubenswrapper[4904]: I1121 16:09:20.029707 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xzw7" event={"ID":"a24463fe-4844-429f-b80e-b4cbb4298096","Type":"ContainerStarted","Data":"145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125"}
Nov 21 16:09:21 crc kubenswrapper[4904]: I1121 16:09:21.058340 4904 generic.go:334] "Generic (PLEG): container finished" podID="a24463fe-4844-429f-b80e-b4cbb4298096" containerID="145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125" exitCode=0
Nov 21 16:09:21 crc kubenswrapper[4904]: I1121 16:09:21.058686 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xzw7" event={"ID":"a24463fe-4844-429f-b80e-b4cbb4298096","Type":"ContainerDied","Data":"145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125"}
Nov 21 16:09:22 crc kubenswrapper[4904]: I1121 16:09:22.150919 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xzw7" event={"ID":"a24463fe-4844-429f-b80e-b4cbb4298096","Type":"ContainerStarted","Data":"0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993"}
Nov 21 16:09:22 crc kubenswrapper[4904]: I1121 16:09:22.197826 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4xzw7" podStartSLOduration=2.5150290159999997 podStartE2EDuration="5.197799168s" podCreationTimestamp="2025-11-21 16:09:17 +0000 UTC" firstStartedPulling="2025-11-21 16:09:19.009108312 +0000 UTC m=+9433.130640864" lastFinishedPulling="2025-11-21 16:09:21.691878464 +0000 UTC m=+9435.813411016" observedRunningTime="2025-11-21 16:09:22.178880439 +0000 UTC m=+9436.300412992" watchObservedRunningTime="2025-11-21 16:09:22.197799168 +0000 UTC m=+9436.319331720"
Nov 21 16:09:27 crc kubenswrapper[4904]: I1121 16:09:27.454753 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4xzw7"
pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:27 crc kubenswrapper[4904]: I1121 16:09:27.455436 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:27 crc kubenswrapper[4904]: I1121 16:09:27.511884 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:28 crc kubenswrapper[4904]: I1121 16:09:28.269903 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:28 crc kubenswrapper[4904]: I1121 16:09:28.329439 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xzw7"] Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.235708 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4xzw7" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" containerName="registry-server" containerID="cri-o://0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993" gracePeriod=2 Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.767469 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.870278 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-utilities\") pod \"a24463fe-4844-429f-b80e-b4cbb4298096\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.870410 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-catalog-content\") pod \"a24463fe-4844-429f-b80e-b4cbb4298096\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.870479 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qfqv\" (UniqueName: \"kubernetes.io/projected/a24463fe-4844-429f-b80e-b4cbb4298096-kube-api-access-9qfqv\") pod \"a24463fe-4844-429f-b80e-b4cbb4298096\" (UID: \"a24463fe-4844-429f-b80e-b4cbb4298096\") " Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.871573 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-utilities" (OuterVolumeSpecName: "utilities") pod "a24463fe-4844-429f-b80e-b4cbb4298096" (UID: "a24463fe-4844-429f-b80e-b4cbb4298096"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.877413 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a24463fe-4844-429f-b80e-b4cbb4298096-kube-api-access-9qfqv" (OuterVolumeSpecName: "kube-api-access-9qfqv") pod "a24463fe-4844-429f-b80e-b4cbb4298096" (UID: "a24463fe-4844-429f-b80e-b4cbb4298096"). InnerVolumeSpecName "kube-api-access-9qfqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.890718 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a24463fe-4844-429f-b80e-b4cbb4298096" (UID: "a24463fe-4844-429f-b80e-b4cbb4298096"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.973453 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-utilities\") on node \"crc\" DevicePath \"\"" Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.973494 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a24463fe-4844-429f-b80e-b4cbb4298096-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 21 16:09:30 crc kubenswrapper[4904]: I1121 16:09:30.973506 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qfqv\" (UniqueName: \"kubernetes.io/projected/a24463fe-4844-429f-b80e-b4cbb4298096-kube-api-access-9qfqv\") on node \"crc\" DevicePath \"\"" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.264450 4904 generic.go:334] "Generic (PLEG): container finished" podID="a24463fe-4844-429f-b80e-b4cbb4298096" containerID="0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993" exitCode=0 Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.264500 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xzw7" event={"ID":"a24463fe-4844-429f-b80e-b4cbb4298096","Type":"ContainerDied","Data":"0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993"} Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.264545 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4xzw7" event={"ID":"a24463fe-4844-429f-b80e-b4cbb4298096","Type":"ContainerDied","Data":"b65de1911a540336f3584b62697acaf07bce3494815bfbf4ea73e994bc1606ad"} Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.264567 4904 scope.go:117] "RemoveContainer" containerID="0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.265947 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4xzw7" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.297674 4904 scope.go:117] "RemoveContainer" containerID="145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.307207 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xzw7"] Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.318024 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4xzw7"] Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.329739 4904 scope.go:117] "RemoveContainer" containerID="c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.388069 4904 scope.go:117] "RemoveContainer" containerID="0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993" Nov 21 16:09:31 crc kubenswrapper[4904]: E1121 16:09:31.388705 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993\": container with ID starting with 0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993 not found: ID does not exist" containerID="0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.388875 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993"} err="failed to get container status \"0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993\": rpc error: code = NotFound desc = could not find container \"0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993\": container with ID starting with 0cb2fda9755dd1ece8ede72244e8abbc6dfc2b9002cd33361bdda74fe9ce4993 not found: ID does not exist" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.389002 4904 scope.go:117] "RemoveContainer" containerID="145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125" Nov 21 16:09:31 crc kubenswrapper[4904]: E1121 16:09:31.389628 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125\": container with ID starting with 145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125 not found: ID does not exist" containerID="145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.389708 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125"} err="failed to get container status \"145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125\": rpc error: code = NotFound desc = could not find container \"145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125\": container with ID starting with 145515a949f71f873a9399f162b2b5ea711b6a370025ab6a53ec068a2c396125 not found: ID does not exist" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.389758 4904 scope.go:117] "RemoveContainer" containerID="c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b" Nov 21 16:09:31 crc kubenswrapper[4904]: E1121 16:09:31.390270 4904 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b\": container with ID starting with c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b not found: ID does not exist" containerID="c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b" Nov 21 16:09:31 crc kubenswrapper[4904]: I1121 16:09:31.390393 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b"} err="failed to get container status \"c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b\": rpc error: code = NotFound desc = could not find container \"c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b\": container with ID starting with c842b1872c70d5d67b106d4b701c89443f31dfd527ca81987c78b51e99460f3b not found: ID does not exist" Nov 21 16:09:32 crc kubenswrapper[4904]: I1121 16:09:32.526823 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" path="/var/lib/kubelet/pods/a24463fe-4844-429f-b80e-b4cbb4298096/volumes" Nov 21 16:09:36 crc kubenswrapper[4904]: I1121 16:09:36.779586 4904 scope.go:117] "RemoveContainer" containerID="e2f18d02d26a56f000512e37f496ee408eb7c9c0027e3e0c5aee39b0691876c4" Nov 21 16:10:53 crc kubenswrapper[4904]: I1121 16:10:53.112385 4904 generic.go:334] "Generic (PLEG): container finished" podID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerID="3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc" exitCode=0 Nov 21 16:10:53 crc kubenswrapper[4904]: I1121 16:10:53.112497 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qq87d/must-gather-4b5lz" event={"ID":"b29dc509-2e94-4042-950a-d37c24cef5a0","Type":"ContainerDied","Data":"3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc"} Nov 21 16:10:53 crc kubenswrapper[4904]: I1121 16:10:53.118398 4904 scope.go:117] "RemoveContainer" containerID="3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc" Nov 21 16:10:53 crc kubenswrapper[4904]: I1121 16:10:53.189539 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qq87d_must-gather-4b5lz_b29dc509-2e94-4042-950a-d37c24cef5a0/gather/0.log" Nov 21 16:11:05 crc kubenswrapper[4904]: I1121 16:11:05.984276 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qq87d/must-gather-4b5lz"] Nov 21 16:11:05 crc kubenswrapper[4904]: I1121 16:11:05.986270 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qq87d/must-gather-4b5lz" podUID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerName="copy" containerID="cri-o://16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8" gracePeriod=2 Nov 21 16:11:05 crc kubenswrapper[4904]: I1121 16:11:05.998529 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qq87d/must-gather-4b5lz"] Nov 21 16:11:06 crc kubenswrapper[4904]: I1121 16:11:06.876229 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qq87d_must-gather-4b5lz_b29dc509-2e94-4042-950a-d37c24cef5a0/copy/0.log" Nov 21 16:11:06 crc kubenswrapper[4904]: I1121 16:11:06.877290 4904 util.go:48] "No ready sandbox for pod can be found. 
Nov 21 16:11:06 crc kubenswrapper[4904]: I1121 16:11:06.984293 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmcjr\" (UniqueName: \"kubernetes.io/projected/b29dc509-2e94-4042-950a-d37c24cef5a0-kube-api-access-qmcjr\") pod \"b29dc509-2e94-4042-950a-d37c24cef5a0\" (UID: \"b29dc509-2e94-4042-950a-d37c24cef5a0\") "
Nov 21 16:11:06 crc kubenswrapper[4904]: I1121 16:11:06.984612 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b29dc509-2e94-4042-950a-d37c24cef5a0-must-gather-output\") pod \"b29dc509-2e94-4042-950a-d37c24cef5a0\" (UID: \"b29dc509-2e94-4042-950a-d37c24cef5a0\") "
Nov 21 16:11:06 crc kubenswrapper[4904]: I1121 16:11:06.991994 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b29dc509-2e94-4042-950a-d37c24cef5a0-kube-api-access-qmcjr" (OuterVolumeSpecName: "kube-api-access-qmcjr") pod "b29dc509-2e94-4042-950a-d37c24cef5a0" (UID: "b29dc509-2e94-4042-950a-d37c24cef5a0"). InnerVolumeSpecName "kube-api-access-qmcjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.086803 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmcjr\" (UniqueName: \"kubernetes.io/projected/b29dc509-2e94-4042-950a-d37c24cef5a0-kube-api-access-qmcjr\") on node \"crc\" DevicePath \"\""
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.171128 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b29dc509-2e94-4042-950a-d37c24cef5a0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b29dc509-2e94-4042-950a-d37c24cef5a0" (UID: "b29dc509-2e94-4042-950a-d37c24cef5a0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.189255 4904 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b29dc509-2e94-4042-950a-d37c24cef5a0-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.258678 4904 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qq87d_must-gather-4b5lz_b29dc509-2e94-4042-950a-d37c24cef5a0/copy/0.log"
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.259723 4904 generic.go:334] "Generic (PLEG): container finished" podID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerID="16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8" exitCode=143
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.259808 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qq87d/must-gather-4b5lz"
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.259807 4904 scope.go:117] "RemoveContainer" containerID="16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8"
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.287683 4904 scope.go:117] "RemoveContainer" containerID="3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc"
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.330543 4904 scope.go:117] "RemoveContainer" containerID="16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8"
Nov 21 16:11:07 crc kubenswrapper[4904]: E1121 16:11:07.331109 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8\": container with ID starting with 16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8 not found: ID does not exist" containerID="16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8"
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.331229 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8"} err="failed to get container status \"16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8\": rpc error: code = NotFound desc = could not find container \"16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8\": container with ID starting with 16abc899d1dd82f1738171bbb8e94bf7274b1704910dbddf445b25fe8f1778c8 not found: ID does not exist"
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.331334 4904 scope.go:117] "RemoveContainer" containerID="3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc"
Nov 21 16:11:07 crc kubenswrapper[4904]: E1121 16:11:07.331809 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc\": container with ID starting with 3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc not found: ID does not exist" containerID="3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc"
Nov 21 16:11:07 crc kubenswrapper[4904]: I1121 16:11:07.331870 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc"} err="failed to get container status \"3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc\": rpc error: code = NotFound desc = could not find container \"3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc\": container with ID starting with 3daab5198ee0a4db6edc05b568f46ab3f72b37a114d9e453adba4b298c2253bc not found: ID does not exist"
Nov 21 16:11:08 crc kubenswrapper[4904]: I1121 16:11:08.525886 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b29dc509-2e94-4042-950a-d37c24cef5a0" path="/var/lib/kubelet/pods/b29dc509-2e94-4042-950a-d37c24cef5a0/volumes"
Nov 21 16:11:28 crc kubenswrapper[4904]: I1121 16:11:28.113644 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 16:11:28 crc kubenswrapper[4904]: I1121 16:11:28.114208 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
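Two exit codes appear in the must-gather teardown above: the gather container finished on its own with exitCode=0, while the copy container, killed with gracePeriod=2, reports exitCode=143 = 128 + 15, the runtime convention for "terminated by signal", here SIGTERM. A small decoder under that convention (Python; describe_exit is an illustrative helper, not kubelet API):

```python
import signal

def describe_exit(code: int) -> str:
    # Container runtimes report 128+N when the process died from signal N.
    if code == 0:
        return "exited cleanly"
    if code > 128:
        try:
            return f"killed by {signal.Signals(code - 128).name}"
        except ValueError:
            return f"killed by signal {code - 128}"
    return f"exited with status {code}"

assert describe_exit(0) == "exited cleanly"        # the gather container above
assert describe_exit(143) == "killed by SIGTERM"   # the copy container, graceful kill
```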
Nov 21 16:11:36 crc kubenswrapper[4904]: I1121 16:11:36.960352 4904 scope.go:117] "RemoveContainer" containerID="a4f86ded64ae4a1dea8b560dd0f2b854035f76a4e93407f63f8ebd84989c9fa8"
Nov 21 16:11:58 crc kubenswrapper[4904]: I1121 16:11:58.113581 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 16:11:58 crc kubenswrapper[4904]: I1121 16:11:58.114186 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 16:12:28 crc kubenswrapper[4904]: I1121 16:12:28.113881 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 16:12:28 crc kubenswrapper[4904]: I1121 16:12:28.114394 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 16:12:28 crc kubenswrapper[4904]: I1121 16:12:28.114447 4904 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn"
Nov 21 16:12:28 crc kubenswrapper[4904]: I1121 16:12:28.115357 4904 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5c9bab99d951776ab72466468299fe972f52af3081a0e25115223fb538d13e5"} pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 21 16:12:28 crc kubenswrapper[4904]: I1121 16:12:28.115409 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" containerID="cri-o://a5c9bab99d951776ab72466468299fe972f52af3081a0e25115223fb538d13e5" gracePeriod=600
Nov 21 16:12:29 crc kubenswrapper[4904]: I1121 16:12:29.080144 4904 generic.go:334] "Generic (PLEG): container finished" podID="96e1548b-c40d-450b-a2f1-51e56c467178" containerID="a5c9bab99d951776ab72466468299fe972f52af3081a0e25115223fb538d13e5" exitCode=0
Nov 21 16:12:29 crc kubenswrapper[4904]: I1121 16:12:29.080230 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerDied","Data":"a5c9bab99d951776ab72466468299fe972f52af3081a0e25115223fb538d13e5"}
Nov 21 16:12:29 crc kubenswrapper[4904]: I1121 16:12:29.080929 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" event={"ID":"96e1548b-c40d-450b-a2f1-51e56c467178","Type":"ContainerStarted","Data":"f97d4c8a702cd8325287eebf3d7598a00bc2e9fcdf1cb137db44cd3efbe79ee2"}
Nov 21 16:12:29 crc kubenswrapper[4904]: I1121 16:12:29.080956 4904 scope.go:117] "RemoveContainer" containerID="fbfbad4ef5e8b936402332855cf773111c5144075bf9a8e7a1883b8e6a187b14"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.628720 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hdgs4"]
Nov 21 16:12:47 crc kubenswrapper[4904]: E1121 16:12:47.629897 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" containerName="extract-content"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.629916 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" containerName="extract-content"
Nov 21 16:12:47 crc kubenswrapper[4904]: E1121 16:12:47.629954 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerName="copy"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.629961 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerName="copy"
Nov 21 16:12:47 crc kubenswrapper[4904]: E1121 16:12:47.629977 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" containerName="registry-server"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.629985 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" containerName="registry-server"
Nov 21 16:12:47 crc kubenswrapper[4904]: E1121 16:12:47.630003 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerName="gather"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.630025 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerName="gather"
Nov 21 16:12:47 crc kubenswrapper[4904]: E1121 16:12:47.630065 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" containerName="extract-utilities"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.630073 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" containerName="extract-utilities"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.630337 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="a24463fe-4844-429f-b80e-b4cbb4298096" containerName="registry-server"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.630354 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerName="copy"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.630390 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="b29dc509-2e94-4042-950a-d37c24cef5a0" containerName="gather"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.632098 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.641926 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdgs4"]
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.756843 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-utilities\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.757015 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pjbk\" (UniqueName: \"kubernetes.io/projected/1edb681e-ad89-43ec-bd28-ced684aef607-kube-api-access-4pjbk\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.757271 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-catalog-content\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.859040 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pjbk\" (UniqueName: \"kubernetes.io/projected/1edb681e-ad89-43ec-bd28-ced684aef607-kube-api-access-4pjbk\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.859178 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-catalog-content\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.859204 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-utilities\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.859721 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-catalog-content\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.859796 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-utilities\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.890486 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pjbk\" (UniqueName: \"kubernetes.io/projected/1edb681e-ad89-43ec-bd28-ced684aef607-kube-api-access-4pjbk\") pod \"certified-operators-hdgs4\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") " pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:47 crc kubenswrapper[4904]: I1121 16:12:47.973641 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.510079 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdgs4"]
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.619289 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jgzl5"]
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.622342 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.629809 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgzl5"]
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.780918 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vpqh\" (UniqueName: \"kubernetes.io/projected/07e30e47-ad79-4646-a591-442ce5a16a4a-kube-api-access-8vpqh\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.781060 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-utilities\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.781084 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-catalog-content\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.883602 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vpqh\" (UniqueName: \"kubernetes.io/projected/07e30e47-ad79-4646-a591-442ce5a16a4a-kube-api-access-8vpqh\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.883996 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-utilities\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.884014 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-catalog-content\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.884531 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-utilities\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.884579 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-catalog-content\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.902973 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vpqh\" (UniqueName: \"kubernetes.io/projected/07e30e47-ad79-4646-a591-442ce5a16a4a-kube-api-access-8vpqh\") pod \"community-operators-jgzl5\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") " pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:48 crc kubenswrapper[4904]: I1121 16:12:48.966495 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:12:49 crc kubenswrapper[4904]: I1121 16:12:49.278819 4904 generic.go:334] "Generic (PLEG): container finished" podID="1edb681e-ad89-43ec-bd28-ced684aef607" containerID="7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1" exitCode=0
Nov 21 16:12:49 crc kubenswrapper[4904]: I1121 16:12:49.278894 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdgs4" event={"ID":"1edb681e-ad89-43ec-bd28-ced684aef607","Type":"ContainerDied","Data":"7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1"}
Nov 21 16:12:49 crc kubenswrapper[4904]: I1121 16:12:49.279116 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdgs4" event={"ID":"1edb681e-ad89-43ec-bd28-ced684aef607","Type":"ContainerStarted","Data":"68bc2c39a149e6660efcc336b1820d8c2e0bd430ee20686117effbd20436dd77"}
Nov 21 16:12:49 crc kubenswrapper[4904]: I1121 16:12:49.281168 4904 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 21 16:12:49 crc kubenswrapper[4904]: I1121 16:12:49.470643 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgzl5"]
Nov 21 16:12:50 crc kubenswrapper[4904]: I1121 16:12:50.291856 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdgs4" event={"ID":"1edb681e-ad89-43ec-bd28-ced684aef607","Type":"ContainerStarted","Data":"9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7"}
Nov 21 16:12:50 crc kubenswrapper[4904]: I1121 16:12:50.294226 4904 generic.go:334] "Generic (PLEG): container finished" podID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerID="8481887c4a3e47d4c43615000a54a848cd88b75881085d2d457c377ace13c470" exitCode=0
Nov 21 16:12:50 crc kubenswrapper[4904]: I1121 16:12:50.294256 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgzl5" event={"ID":"07e30e47-ad79-4646-a591-442ce5a16a4a","Type":"ContainerDied","Data":"8481887c4a3e47d4c43615000a54a848cd88b75881085d2d457c377ace13c470"}
event={"ID":"07e30e47-ad79-4646-a591-442ce5a16a4a","Type":"ContainerDied","Data":"8481887c4a3e47d4c43615000a54a848cd88b75881085d2d457c377ace13c470"} Nov 21 16:12:50 crc kubenswrapper[4904]: I1121 16:12:50.294273 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgzl5" event={"ID":"07e30e47-ad79-4646-a591-442ce5a16a4a","Type":"ContainerStarted","Data":"7e6e482d065eab90660f0d58c4c6593d91284816a75194669f040c4fe9ed798d"} Nov 21 16:12:51 crc kubenswrapper[4904]: I1121 16:12:51.305928 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgzl5" event={"ID":"07e30e47-ad79-4646-a591-442ce5a16a4a","Type":"ContainerStarted","Data":"e65de52f547b752f1cb93a3d845ccf26793ea6a82561ae4abc249cf6e1dc5a47"} Nov 21 16:12:52 crc kubenswrapper[4904]: I1121 16:12:52.317416 4904 generic.go:334] "Generic (PLEG): container finished" podID="1edb681e-ad89-43ec-bd28-ced684aef607" containerID="9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7" exitCode=0 Nov 21 16:12:52 crc kubenswrapper[4904]: I1121 16:12:52.317515 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdgs4" event={"ID":"1edb681e-ad89-43ec-bd28-ced684aef607","Type":"ContainerDied","Data":"9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7"} Nov 21 16:12:53 crc kubenswrapper[4904]: I1121 16:12:53.331483 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdgs4" event={"ID":"1edb681e-ad89-43ec-bd28-ced684aef607","Type":"ContainerStarted","Data":"8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3"} Nov 21 16:12:53 crc kubenswrapper[4904]: I1121 16:12:53.355481 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hdgs4" podStartSLOduration=2.6637627630000003 podStartE2EDuration="6.355458259s" podCreationTimestamp="2025-11-21 16:12:47 +0000 UTC" firstStartedPulling="2025-11-21 16:12:49.28093067 +0000 UTC m=+9643.402463222" lastFinishedPulling="2025-11-21 16:12:52.972626166 +0000 UTC m=+9647.094158718" observedRunningTime="2025-11-21 16:12:53.347046435 +0000 UTC m=+9647.468579017" watchObservedRunningTime="2025-11-21 16:12:53.355458259 +0000 UTC m=+9647.476990831" Nov 21 16:12:54 crc kubenswrapper[4904]: I1121 16:12:54.343556 4904 generic.go:334] "Generic (PLEG): container finished" podID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerID="e65de52f547b752f1cb93a3d845ccf26793ea6a82561ae4abc249cf6e1dc5a47" exitCode=0 Nov 21 16:12:54 crc kubenswrapper[4904]: I1121 16:12:54.343642 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgzl5" event={"ID":"07e30e47-ad79-4646-a591-442ce5a16a4a","Type":"ContainerDied","Data":"e65de52f547b752f1cb93a3d845ccf26793ea6a82561ae4abc249cf6e1dc5a47"} Nov 21 16:12:55 crc kubenswrapper[4904]: I1121 16:12:55.358504 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgzl5" event={"ID":"07e30e47-ad79-4646-a591-442ce5a16a4a","Type":"ContainerStarted","Data":"43df9912d4c9c9751cb8cc76f52ff42e35af3ac9abd74390211f572f5bd6795b"} Nov 21 16:12:55 crc kubenswrapper[4904]: I1121 16:12:55.384255 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgzl5" podStartSLOduration=2.92804858 podStartE2EDuration="7.384235643s" podCreationTimestamp="2025-11-21 16:12:48 +0000 UTC" 
firstStartedPulling="2025-11-21 16:12:50.295453546 +0000 UTC m=+9644.416986098" lastFinishedPulling="2025-11-21 16:12:54.751640609 +0000 UTC m=+9648.873173161" observedRunningTime="2025-11-21 16:12:55.377701284 +0000 UTC m=+9649.499233846" watchObservedRunningTime="2025-11-21 16:12:55.384235643 +0000 UTC m=+9649.505768195" Nov 21 16:12:57 crc kubenswrapper[4904]: I1121 16:12:57.974212 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hdgs4" Nov 21 16:12:57 crc kubenswrapper[4904]: I1121 16:12:57.974604 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hdgs4" Nov 21 16:12:58 crc kubenswrapper[4904]: I1121 16:12:58.967217 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgzl5" Nov 21 16:12:58 crc kubenswrapper[4904]: I1121 16:12:58.967538 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jgzl5" Nov 21 16:12:59 crc kubenswrapper[4904]: I1121 16:12:59.019733 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-hdgs4" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="registry-server" probeResult="failure" output=< Nov 21 16:12:59 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 16:12:59 crc kubenswrapper[4904]: > Nov 21 16:13:00 crc kubenswrapper[4904]: I1121 16:13:00.012249 4904 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jgzl5" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="registry-server" probeResult="failure" output=< Nov 21 16:13:00 crc kubenswrapper[4904]: timeout: failed to connect service ":50051" within 1s Nov 21 16:13:00 crc kubenswrapper[4904]: > Nov 21 16:13:08 crc kubenswrapper[4904]: I1121 16:13:08.024932 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hdgs4" Nov 21 16:13:08 crc kubenswrapper[4904]: I1121 16:13:08.077727 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hdgs4" Nov 21 16:13:08 crc kubenswrapper[4904]: I1121 16:13:08.259843 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdgs4"] Nov 21 16:13:09 crc kubenswrapper[4904]: I1121 16:13:09.016325 4904 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgzl5" Nov 21 16:13:09 crc kubenswrapper[4904]: I1121 16:13:09.063457 4904 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgzl5" Nov 21 16:13:09 crc kubenswrapper[4904]: I1121 16:13:09.503455 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hdgs4" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="registry-server" containerID="cri-o://8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3" gracePeriod=2 Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.094723 4904 util.go:48] "No ready sandbox for pod can be found. 
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.164726 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pjbk\" (UniqueName: \"kubernetes.io/projected/1edb681e-ad89-43ec-bd28-ced684aef607-kube-api-access-4pjbk\") pod \"1edb681e-ad89-43ec-bd28-ced684aef607\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") "
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.164863 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-catalog-content\") pod \"1edb681e-ad89-43ec-bd28-ced684aef607\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") "
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.164971 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-utilities\") pod \"1edb681e-ad89-43ec-bd28-ced684aef607\" (UID: \"1edb681e-ad89-43ec-bd28-ced684aef607\") "
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.166832 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-utilities" (OuterVolumeSpecName: "utilities") pod "1edb681e-ad89-43ec-bd28-ced684aef607" (UID: "1edb681e-ad89-43ec-bd28-ced684aef607"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.177215 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edb681e-ad89-43ec-bd28-ced684aef607-kube-api-access-4pjbk" (OuterVolumeSpecName: "kube-api-access-4pjbk") pod "1edb681e-ad89-43ec-bd28-ced684aef607" (UID: "1edb681e-ad89-43ec-bd28-ced684aef607"). InnerVolumeSpecName "kube-api-access-4pjbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.234372 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1edb681e-ad89-43ec-bd28-ced684aef607" (UID: "1edb681e-ad89-43ec-bd28-ced684aef607"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.267622 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.267676 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pjbk\" (UniqueName: \"kubernetes.io/projected/1edb681e-ad89-43ec-bd28-ced684aef607-kube-api-access-4pjbk\") on node \"crc\" DevicePath \"\""
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.267690 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1edb681e-ad89-43ec-bd28-ced684aef607-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.522263 4904 generic.go:334] "Generic (PLEG): container finished" podID="1edb681e-ad89-43ec-bd28-ced684aef607" containerID="8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3" exitCode=0
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.522399 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdgs4"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.531795 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdgs4" event={"ID":"1edb681e-ad89-43ec-bd28-ced684aef607","Type":"ContainerDied","Data":"8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3"}
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.531876 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdgs4" event={"ID":"1edb681e-ad89-43ec-bd28-ced684aef607","Type":"ContainerDied","Data":"68bc2c39a149e6660efcc336b1820d8c2e0bd430ee20686117effbd20436dd77"}
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.531910 4904 scope.go:117] "RemoveContainer" containerID="8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.565257 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdgs4"]
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.572869 4904 scope.go:117] "RemoveContainer" containerID="9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.579815 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hdgs4"]
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.600986 4904 scope.go:117] "RemoveContainer" containerID="7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.662618 4904 scope.go:117] "RemoveContainer" containerID="8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3"
Nov 21 16:13:10 crc kubenswrapper[4904]: E1121 16:13:10.663261 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3\": container with ID starting with 8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3 not found: ID does not exist" containerID="8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.663310 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3"} err="failed to get container status \"8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3\": rpc error: code = NotFound desc = could not find container \"8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3\": container with ID starting with 8c4fd1d72c2e7bfbddb66a283709fab076f610f0f3d03af23b76b0ebb8b238e3 not found: ID does not exist"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.663338 4904 scope.go:117] "RemoveContainer" containerID="9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7"
Nov 21 16:13:10 crc kubenswrapper[4904]: E1121 16:13:10.663906 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7\": container with ID starting with 9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7 not found: ID does not exist" containerID="9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.663930 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7"} err="failed to get container status \"9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7\": rpc error: code = NotFound desc = could not find container \"9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7\": container with ID starting with 9cd79f0d5b7b1de7e142b98141199ae23582bff675cdfb207af0f076aa422be7 not found: ID does not exist"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.663945 4904 scope.go:117] "RemoveContainer" containerID="7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1"
Nov 21 16:13:10 crc kubenswrapper[4904]: E1121 16:13:10.664406 4904 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1\": container with ID starting with 7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1 not found: ID does not exist" containerID="7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1"
Nov 21 16:13:10 crc kubenswrapper[4904]: I1121 16:13:10.664431 4904 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1"} err="failed to get container status \"7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1\": rpc error: code = NotFound desc = could not find container \"7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1\": container with ID starting with 7c76815f638137a20c1cb2b30f41a79a41ac0b464ab15150028150a05191deb1 not found: ID does not exist"
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.058986 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgzl5"]
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.059213 4904 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jgzl5" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="registry-server" containerID="cri-o://43df9912d4c9c9751cb8cc76f52ff42e35af3ac9abd74390211f572f5bd6795b" gracePeriod=2
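The same delete pattern as in the earlier teardowns repeats for certified-operators-hdgs4: each container ID gets a first RemoveContainer that succeeds, then a second attempt (replayed around the API SyncLoop REMOVE) whose status lookup fails with NotFound; the kubelet logs this at error level but it appears benign, since the container is simply already gone. A sketch that confirms the excerpt only contains this two-removes/one-NotFound shape (Python, over parsed records):

```python
import re
from collections import Counter

CID = re.compile(r'containerID="([0-9a-f]{64})"')

def remove_attempts(records):
    removes, notfound = Counter(), Counter()
    for r in records:
        m = CID.search(r["msg"])
        if not m:
            continue
        cid = m.group(1)
        if r["msg"].startswith('"RemoveContainer"'):
            removes[cid] += 1
        elif r["sev"] == "E" and "NotFound" in r["msg"]:
            notfound[cid] += 1
    # In this excerpt every removed ID shows exactly 2 removes and 1 NotFound.
    return {cid: (removes[cid], notfound[cid]) for cid in removes}
```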
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.536749 4904 generic.go:334] "Generic (PLEG): container finished" podID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerID="43df9912d4c9c9751cb8cc76f52ff42e35af3ac9abd74390211f572f5bd6795b" exitCode=0
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.536981 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgzl5" event={"ID":"07e30e47-ad79-4646-a591-442ce5a16a4a","Type":"ContainerDied","Data":"43df9912d4c9c9751cb8cc76f52ff42e35af3ac9abd74390211f572f5bd6795b"}
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.537191 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgzl5" event={"ID":"07e30e47-ad79-4646-a591-442ce5a16a4a","Type":"ContainerDied","Data":"7e6e482d065eab90660f0d58c4c6593d91284816a75194669f040c4fe9ed798d"}
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.537204 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e6e482d065eab90660f0d58c4c6593d91284816a75194669f040c4fe9ed798d"
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.550202 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.698217 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vpqh\" (UniqueName: \"kubernetes.io/projected/07e30e47-ad79-4646-a591-442ce5a16a4a-kube-api-access-8vpqh\") pod \"07e30e47-ad79-4646-a591-442ce5a16a4a\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") "
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.698422 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-utilities\") pod \"07e30e47-ad79-4646-a591-442ce5a16a4a\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") "
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.698512 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-catalog-content\") pod \"07e30e47-ad79-4646-a591-442ce5a16a4a\" (UID: \"07e30e47-ad79-4646-a591-442ce5a16a4a\") "
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.698981 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-utilities" (OuterVolumeSpecName: "utilities") pod "07e30e47-ad79-4646-a591-442ce5a16a4a" (UID: "07e30e47-ad79-4646-a591-442ce5a16a4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.699341 4904 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-utilities\") on node \"crc\" DevicePath \"\""
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.705143 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e30e47-ad79-4646-a591-442ce5a16a4a-kube-api-access-8vpqh" (OuterVolumeSpecName: "kube-api-access-8vpqh") pod "07e30e47-ad79-4646-a591-442ce5a16a4a" (UID: "07e30e47-ad79-4646-a591-442ce5a16a4a"). InnerVolumeSpecName "kube-api-access-8vpqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.749357 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07e30e47-ad79-4646-a591-442ce5a16a4a" (UID: "07e30e47-ad79-4646-a591-442ce5a16a4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.801177 4904 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e30e47-ad79-4646-a591-442ce5a16a4a-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 21 16:13:11 crc kubenswrapper[4904]: I1121 16:13:11.801210 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vpqh\" (UniqueName: \"kubernetes.io/projected/07e30e47-ad79-4646-a591-442ce5a16a4a-kube-api-access-8vpqh\") on node \"crc\" DevicePath \"\""
Nov 21 16:13:12 crc kubenswrapper[4904]: I1121 16:13:12.524332 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" path="/var/lib/kubelet/pods/1edb681e-ad89-43ec-bd28-ced684aef607/volumes"
Nov 21 16:13:12 crc kubenswrapper[4904]: I1121 16:13:12.547342 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgzl5"
Nov 21 16:13:12 crc kubenswrapper[4904]: I1121 16:13:12.569988 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgzl5"]
Nov 21 16:13:12 crc kubenswrapper[4904]: I1121 16:13:12.579057 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jgzl5"]
Nov 21 16:13:14 crc kubenswrapper[4904]: I1121 16:13:14.529246 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" path="/var/lib/kubelet/pods/07e30e47-ad79-4646-a591-442ce5a16a4a/volumes"
Nov 21 16:14:28 crc kubenswrapper[4904]: I1121 16:14:28.113491 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 16:14:28 crc kubenswrapper[4904]: I1121 16:14:28.114056 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 16:14:58 crc kubenswrapper[4904]: I1121 16:14:58.113792 4904 patch_prober.go:28] interesting pod/machine-config-daemon-xb8tn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 21 16:14:58 crc kubenswrapper[4904]: I1121 16:14:58.114326 4904 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xb8tn" podUID="96e1548b-c40d-450b-a2f1-51e56c467178" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.173587 4904 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"]
Nov 21 16:15:00 crc kubenswrapper[4904]: E1121 16:15:00.174184 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="extract-content"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.174203 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="extract-content"
Nov 21 16:15:00 crc kubenswrapper[4904]: E1121 16:15:00.174237 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="extract-utilities"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.174245 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="extract-utilities"
Nov 21 16:15:00 crc kubenswrapper[4904]: E1121 16:15:00.174283 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="extract-utilities"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.174291 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="extract-utilities"
Nov 21 16:15:00 crc kubenswrapper[4904]: E1121 16:15:00.174302 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="extract-content"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.174310 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="extract-content"
Nov 21 16:15:00 crc kubenswrapper[4904]: E1121 16:15:00.174323 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="registry-server"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.174329 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="registry-server"
Nov 21 16:15:00 crc kubenswrapper[4904]: E1121 16:15:00.174344 4904 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="registry-server"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.174351 4904 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="registry-server"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.174637 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edb681e-ad89-43ec-bd28-ced684aef607" containerName="registry-server"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.174727 4904 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e30e47-ad79-4646-a591-442ce5a16a4a" containerName="registry-server"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.176463 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.185835 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"]
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.188068 4904 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.188067 4904 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.284455 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/415465d5-eb36-4b26-9965-c7f9158b9ea1-secret-volume\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.284800 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5bt\" (UniqueName: \"kubernetes.io/projected/415465d5-eb36-4b26-9965-c7f9158b9ea1-kube-api-access-cg5bt\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.284901 4904 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/415465d5-eb36-4b26-9965-c7f9158b9ea1-config-volume\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.387371 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/415465d5-eb36-4b26-9965-c7f9158b9ea1-secret-volume\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.387598 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg5bt\" (UniqueName: \"kubernetes.io/projected/415465d5-eb36-4b26-9965-c7f9158b9ea1-kube-api-access-cg5bt\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.387707 4904 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/415465d5-eb36-4b26-9965-c7f9158b9ea1-config-volume\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"
Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.388912 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/415465d5-eb36-4b26-9965-c7f9158b9ea1-config-volume\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"
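The new pod's name encodes its schedule: the CronJob controller names each Job after its scheduled time in minutes since the Unix epoch, so collect-profiles-29395695 decodes to the very minute of the SyncLoop ADD above (a quick check of that naming convention):

```python
from datetime import datetime, timezone

# 29395695 minutes since the epoch, as embedded in the Job/pod name:
scheduled = datetime.fromtimestamp(29395695 * 60, tz=timezone.utc)
print(scheduled)  # 2025-11-21 16:15:00+00:00, matching the ADD at 16:15:00.173587
```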
\"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.395990 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/415465d5-eb36-4b26-9965-c7f9158b9ea1-secret-volume\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.407290 4904 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg5bt\" (UniqueName: \"kubernetes.io/projected/415465d5-eb36-4b26-9965-c7f9158b9ea1-kube-api-access-cg5bt\") pod \"collect-profiles-29395695-lsm5p\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.506078 4904 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" Nov 21 16:15:00 crc kubenswrapper[4904]: I1121 16:15:00.973455 4904 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p"] Nov 21 16:15:01 crc kubenswrapper[4904]: I1121 16:15:01.651509 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" event={"ID":"415465d5-eb36-4b26-9965-c7f9158b9ea1","Type":"ContainerStarted","Data":"3ed60a0707f6abce59c37489b82625fc0cec35569a38b8ed9294061b8b21ba56"} Nov 21 16:15:01 crc kubenswrapper[4904]: I1121 16:15:01.651819 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" event={"ID":"415465d5-eb36-4b26-9965-c7f9158b9ea1","Type":"ContainerStarted","Data":"3a86e32585548cd05116f6cf7fdf53be222c5ecb0af7c71502879fdee6c1c620"} Nov 21 16:15:01 crc kubenswrapper[4904]: I1121 16:15:01.671435 4904 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" podStartSLOduration=1.67141691 podStartE2EDuration="1.67141691s" podCreationTimestamp="2025-11-21 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-21 16:15:01.667709021 +0000 UTC m=+9775.789241573" watchObservedRunningTime="2025-11-21 16:15:01.67141691 +0000 UTC m=+9775.792949462" Nov 21 16:15:02 crc kubenswrapper[4904]: I1121 16:15:02.664983 4904 generic.go:334] "Generic (PLEG): container finished" podID="415465d5-eb36-4b26-9965-c7f9158b9ea1" containerID="3ed60a0707f6abce59c37489b82625fc0cec35569a38b8ed9294061b8b21ba56" exitCode=0 Nov 21 16:15:02 crc kubenswrapper[4904]: I1121 16:15:02.665103 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" event={"ID":"415465d5-eb36-4b26-9965-c7f9158b9ea1","Type":"ContainerDied","Data":"3ed60a0707f6abce59c37489b82625fc0cec35569a38b8ed9294061b8b21ba56"} Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.163445 4904 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.300190 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/415465d5-eb36-4b26-9965-c7f9158b9ea1-secret-volume\") pod \"415465d5-eb36-4b26-9965-c7f9158b9ea1\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.300303 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg5bt\" (UniqueName: \"kubernetes.io/projected/415465d5-eb36-4b26-9965-c7f9158b9ea1-kube-api-access-cg5bt\") pod \"415465d5-eb36-4b26-9965-c7f9158b9ea1\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.300396 4904 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/415465d5-eb36-4b26-9965-c7f9158b9ea1-config-volume\") pod \"415465d5-eb36-4b26-9965-c7f9158b9ea1\" (UID: \"415465d5-eb36-4b26-9965-c7f9158b9ea1\") " Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.301359 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/415465d5-eb36-4b26-9965-c7f9158b9ea1-config-volume" (OuterVolumeSpecName: "config-volume") pod "415465d5-eb36-4b26-9965-c7f9158b9ea1" (UID: "415465d5-eb36-4b26-9965-c7f9158b9ea1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.302445 4904 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/415465d5-eb36-4b26-9965-c7f9158b9ea1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.307677 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415465d5-eb36-4b26-9965-c7f9158b9ea1-kube-api-access-cg5bt" (OuterVolumeSpecName: "kube-api-access-cg5bt") pod "415465d5-eb36-4b26-9965-c7f9158b9ea1" (UID: "415465d5-eb36-4b26-9965-c7f9158b9ea1"). InnerVolumeSpecName "kube-api-access-cg5bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.308474 4904 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415465d5-eb36-4b26-9965-c7f9158b9ea1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "415465d5-eb36-4b26-9965-c7f9158b9ea1" (UID: "415465d5-eb36-4b26-9965-c7f9158b9ea1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.405935 4904 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/415465d5-eb36-4b26-9965-c7f9158b9ea1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.405998 4904 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg5bt\" (UniqueName: \"kubernetes.io/projected/415465d5-eb36-4b26-9965-c7f9158b9ea1-kube-api-access-cg5bt\") on node \"crc\" DevicePath \"\"" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.691898 4904 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" event={"ID":"415465d5-eb36-4b26-9965-c7f9158b9ea1","Type":"ContainerDied","Data":"3a86e32585548cd05116f6cf7fdf53be222c5ecb0af7c71502879fdee6c1c620"} Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.691960 4904 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a86e32585548cd05116f6cf7fdf53be222c5ecb0af7c71502879fdee6c1c620" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.691965 4904 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29395695-lsm5p" Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.759029 4904 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94"] Nov 21 16:15:04 crc kubenswrapper[4904]: I1121 16:15:04.768916 4904 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29395650-tjw94"] Nov 21 16:15:06 crc kubenswrapper[4904]: I1121 16:15:06.530781 4904 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc47cd9-fd7e-44fd-9af6-41e34c400ff8" path="/var/lib/kubelet/pods/ecc47cd9-fd7e-44fd-9af6-41e34c400ff8/volumes"